Monday, December 17, 2012

A meta-analysis of VAK (pre-VARK)

Greetings! It has been a LONG time, but I am back with a new post. I apologize for the delay and hope to have semi-regular posts on this blog again.

I thought I would pick up where I left off, which means I am still dealing with the VARK model. Next on my list is the Myers-Briggs Type Indicator, but before getting there, I have a few more papers on VARK to write about.
Today’s post is actually not specific to VARK; in fact, this paper was written in 1987, before VARK existed. However, the paper evaluated the evidence available at that time for VAK (the precursor to VARK), so I feel it is relevant enough to write about now.

This paper is a meta-analysis: an aggregation of many studies whose combined data are then analyzed for overall trends. In their study, Kavale and Forness pooled data from 39 studies in an attempt to find patterns and draw conclusions. The authors focused on special education classes in this meta-analysis.
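To make the idea concrete, a meta-analysis typically converts each study's group comparison into a standardized effect size and then combines those across studies. Here is a minimal sketch of that idea; the scores below are made up for illustration and are not Kavale and Forness's data or their exact statistical procedure:

```python
from statistics import mean, stdev

def cohens_d(treatment, control):
    """Standardized mean difference between two groups (Cohen's d),
    using a pooled standard deviation."""
    n1, n2 = len(treatment), len(control)
    s1, s2 = stdev(treatment), stdev(control)
    pooled_sd = (((n1 - 1) * s1**2 + (n2 - 1) * s2**2) / (n1 + n2 - 2)) ** 0.5
    return (mean(treatment) - mean(control)) / pooled_sd

# Hypothetical per-study test scores (matched-instruction group, control group):
studies = [
    ([52, 55, 49, 58, 61], [50, 53, 48, 55, 57]),  # "study 1"
    ([47, 50, 52, 46, 49], [48, 51, 50, 47, 50]),  # "study 2"
]

# Convert each study to an effect size, then average across studies.
effect_sizes = [cohens_d(t, c) for t, c in studies]
overall = mean(effect_sizes)  # simple unweighted average
print(f"per-study d: {[round(d, 2) for d in effect_sizes]}, mean d: {overall:.2f}")
```

Real meta-analyses usually weight each study (for example, by sample size or inverse variance) rather than taking a plain average, but the core move is the same: put every study on a common effect-size scale before combining.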

Across this study, three conclusions emerge: 1) current (pre-1987) methods used to sort students into visual, auditory, and kinesthetic learners do not adequately do so; 2) many studies find no benefit for modality matched instruction; and 3) the studies that do find benefits for modality matched instruction find very small ones and are questionable from a research perspective. I will elaborate on each of these points shortly.

Each study they used had a similar format. One group of students was given instruction that was matched to their preferred learning modality (either V, A, or K) and another group of students was given “regular instruction not designed to capitalize on any particular modality.”

Ok, let’s tackle each of the three above points. The first was that current (pre-1987) methods used to sort students into visual, auditory, and kinesthetic learners do not adequately do so. Kavale and Forness found that many of the studies reported a large number of cases where “a subject not selected for a modality group actually scored higher than a subject selected on the bases of a modality strength.” This number was 1 in 5 students across all modalities (V, A, and K) and was 1 in 4 students for the kinesthetic group specifically. If these students really are being sorted according to their “learning style,” the authors argue that this is a large number of improperly sorted students. The authors conclude “… although modality assessments were presumed to differentiate subjects on the bases of modality preferences, there was, in actuality, considerable overlap between preference and non-preference groups.”

Although this study was not specific to VARK, Kavale and Forness raise the issue of the validity of the learning styles assessment tool. One could certainly argue that VARK may not share these issues. However, previous posts on this blog have questioned the validity of VARK as a learning style assessment tool (see VARK Part 2: More theory or VARK Learning style preferences: group comparisons).

The second point was that many studies find no benefit for modality matched instruction. That is, when comparing a group that received instruction matched to their preferred “learning style” to a group that simply received regular instruction, there was no difference. Kavale and Forness discuss this at two levels: the whole-study level and the individual-subject level. At the whole-study level, “The 39 studies reviewed used a variety of designs and procedures but all testing essentially the same hypothesis: matching instructional methods to individual modality preferences would enhance learning. This hypothesis was supported in 13 of the 39 studies (33%), while 67% did not offer support for the modality model.” I believe this quote speaks for itself.

At the level of the individual subjects, Kavale and Forness make the following statements:
“Hence, in two-thirds of the cases, experimental subjects exhibited no gains on standardized outcome assessments as a result of modality matched instruction.”
“Furthermore… over one third of subjects receiving instruction matched to their preferred learning modality actually scored less well than control subjects receiving no special instruction.”
Again, the same pattern appears here: modality matched instruction fails to do better than regular instruction a large portion of the time.

So, what about the studies that do find a difference for modality matched instruction? Well, as point three (above) states, the studies that find benefits for modality matched instruction find very small benefits and are questionable from a research perspective. Throughout their meta-analysis, Kavale and Forness repeatedly make statements suggesting that the gains that are seen with modality matched instruction are small. For example,
“…the gain for modality instruction translates into only a 6 percentile rank improvement. The improvement indicates that 56% of experimental subjects are better off after modality instruction, but this is only slightly above chance level (50%) and indicates conversely that 44% of experimental subjects did not reveal any gain.”

When they compared modality matched instruction against scores on standardized tests, they found:
“…modality matched instruction produced gains of anywhere from 4 to 7 percentile ranks. These levels of improvement are only slightly above chance gains (50%) and suggest that while approximately 54% to 57% of subjects demonstrated improvement, 46% to 43% of subjects receiving this special instruction improved less than control subjects.”
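The percentile figures in these quotes follow from the standard normal-overlap reading of an effect size: assuming normally distributed outcomes, an effect size d implies that a fraction Φ(d) of treated subjects score above the control-group average, so the percentile-rank gain is 100·Φ(d) − 50. A small sketch of that conversion (the d ≈ 0.15 value is my own assumption, chosen to roughly reproduce the quoted 6-rank gain; the paper's exact effect sizes may differ):

```python
from math import erf, sqrt

def percentile_gain(d):
    """Percentile-rank improvement implied by effect size d,
    assuming normally distributed outcomes: 100 * Phi(d) - 50."""
    phi = 0.5 * (1 + erf(d / sqrt(2)))  # standard normal CDF at d
    return 100 * phi - 50

# An effect size of roughly 0.15 yields about a 6 percentile-rank gain,
# i.e. ~56% of treated subjects scoring above the control-group average.
print(f"{percentile_gain(0.15):.1f} percentile ranks")
```

This is why the authors can describe a 56%-vs-50% split and a 6-percentile-rank gain as the same small effect viewed two ways.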

When they examined modality matched instruction and reading abilities, they conclude:
“… modality instruction had only modest effects on improving reading abilities. Some differences in effectiveness emerged when instructional methods were matched to modality preferences, but the positive effects were small. When modality instruction was evaluated across reading skills, 50% of the comparisons revealed effects that were not different from zero. Thus, only limited benefits appeared to accrue to reading skills when instructional practices attempted to capitalize upon particular modalities that were assumed to be the preferred instructional modes for individual subjects.”

Notice the similarities throughout these statements. Across multiple dimensions, the results are strikingly similar: they are either nonexistent (see point 2 above) or very small. Furthermore, Kavale and Forness had seven “judges” rate the studies used on internal validity. Internal validity is basically a measure of the strength of a study (i.e., how well were outside factors controlled?). In research, we strive for studies with high internal validity. The seven judges largely agreed in their conclusions about which studies had strong internal validity and which did not (93% inter-rater agreement). Once the studies were sorted by internal validity, were there any conclusions to be made?

YES! In fact, the studies with the lowest internal validity produced the largest differences between the modality matched instruction groups and the control (regular instruction) group. The smallest differences were found in the group of studies that had the highest levels of internal validity. Kavale and Forness conclude that “the best studies showed no positive effects for modality teaching.”


In summary, I believe Kavale and Forness say it best, so I will just use their final paragraph:
“Although the modality model has long been accepted as true, the present findings, by integrating statistically the available empirical literature, disclosed that the modality model is not effective and efforts would be better directed at improving more substantive aspects of the teaching-learning process. Both aspects of the modality model, testing and teaching, appeared problematic. No reliable assignment of subjects to preferred modality was found, as evidenced by the lack of distinction between selected groups, and no appreciable gain was found by differentiating instruction according to modality preference. Consequently, the modality concept holds little promise for special education, and the present findings provide the necessary basis for no longer endorsing the modality model, since learning appears to be really a matter of substance over style.”

References
Kavale, K. A., & Forness, S. R. (1987). Substance over style: Assessing the efficacy of modality testing and teaching. Exceptional Children, 54(3), 228-239.


7 comments:

  1. Thank you for your analysis. In reading your comments, two questions came to mind. Since all of those in the studies would have been tested, would the results not have been impacted in some way by observation effect? Might those in the group given “regular instruction” have performed better because they thought they were expected to do so? Also, did all of the studies reviewed focus only on the teacher’s matching instruction to modality, or was some value to the learner found in using the assessment as a tool for strengthening individual learning skills?

    1. Bob, your questions are good. I also was wondering how control variables were assessed. Was there any comment made about the skill of the teacher or the strength of the two curricula (modal and non-modal)? I think these kinds of findings are quite significant. I see echoes of them in some of the other literature I have read, but nobody seems very willing to admit that we may be on a faulty track. The intuitive appeal of modal learning is just so attractive, and yet perhaps we are back to assessing teachers' skill and students' efforts instead of objectifiable data.

    2. This comment has been removed by the author.

    3. If I have understood you properly, I agree that the assessment relies at least to some extent on subjective interpretation, leading us "back to assessing teachers' skill and students' efforts instead of objectifiable data."

    4. You ask really good questions, Bob. I would say that it is very difficult to validate whether learning styles actually work because many variables are in play in the process of learning: for example, the background of the learner, the difficulty of the subject, the emotional state of the learner, and the place where learning happens. Studies might or might not take these variables into consideration, which makes the task of combining studies even more difficult.

      I think that the concept of learning styles is intuitively evident. My opinion is that students learn best if they are taught in a way in which they have already learned to learn. It is a matter of preference. This does not exclude students from learning in different ways; learning styles just provide the wording for speaking about the ways that students prefer (or are accustomed) to learn. It is true that there are other variables in play, but this does not exclude the possibility of learning styles being one of those variables.

  2. I appreciate the conversation so far. Learning styles hold an appeal for many because of their intuitive nature, but they are hard to validate through quantitative means. However, a current lack of evidence should not automatically lead to dismissal of learning styles theory.

    Tony makes a good point about other important variables that are not always considered. Knewton recently blogged about multiple data points to consider (http://www.knewton.com/blog/ceo-jose-ferreira/rebooting-learning-styles/)

    "the amount of content covered per session, the format of the learning experience (text vs. video vs. game vs. physical simulation vs. group discussion, etc.), the difficulty level of prose explaining a given concept, the difficulty of accompanying practice questions, the time of day, whether content contains mnemonic devices, whether it confuses cause and effect, whether it makes use of lists, student attention span, student engagement with particular learning content, strategic modalities (e.g., does the content define a procedure vs. address a common misconception vs. use a concrete example?), the presence or absence of learning aids (e.g., hints), user-specific features (e.g., difficulty relative to the student’s current proficiency), and many, many more."

    I would agree that learning style and modality matching do not provide the full picture of the teaching-learning process. Qualitative feedback can describe preference, but quantifying results, as with modality matching, has proven difficult. It may be that learning styles (VAK, VARK), or perhaps multiple intelligences, provide a pragmatic tool in teaching that is difficult to verify but useful nonetheless.

  3. Learning styles vary from person to person. Hence, understanding your learning style could become quite crucial in the long run.
