I thought I would pick up where I left off, which means still dealing with the VARK model. Next on my list is the Myers-Briggs Type Indicator, but before getting there, I have a few more papers on VARK to write about.
Today’s post is actually not specific to VARK; in fact, this paper was written in 1987, before VARK existed. However, the paper evaluated the evidence available at that time for VAK (the precursor to VARK), so I feel it is relevant enough to write about now.
This paper is a meta-analysis. A meta-analysis is basically an aggregation of many studies that is then analyzed to find trends in the data. In their study, Kavale and Forness combined data from 39 studies in an attempt to find patterns and draw conclusions. The authors focused on special education classes in this meta-analysis.
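To give a rough sense of what aggregating studies means in practice, here is a minimal sketch of pooling standardized effect sizes from several studies into one overall figure. The study labels, effect sizes, and sample sizes below are invented for illustration; they are not Kavale and Forness’s data, and their actual procedure was more involved.

```python
# Minimal sketch of how a meta-analysis pools results across studies.
# The study labels, effect sizes (Cohen's d), and sample sizes below are
# invented for illustration -- they are NOT the data from Kavale & Forness.

studies = [
    # (label, effect size d, sample size n)
    ("Study A", 0.40, 30),
    ("Study B", 0.05, 60),
    ("Study C", -0.10, 45),
]

# Weight each study's effect size by its sample size, so larger studies
# count for more, then compute the pooled (weighted mean) effect size.
total_n = sum(n for _, _, n in studies)
pooled_d = sum(d * n for _, d, n in studies) / total_n

print(f"Pooled effect size across {len(studies)} studies: d = {pooled_d:.2f}")
```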
Across this meta-analysis, three conclusions emerge: 1) the methods current at the time (pre-1987) for sorting students into visual, auditory, and kinesthetic learners do not do so adequately; 2) many studies find no benefit from modality-matched instruction; and 3) the studies that do find benefits for modality-matched instruction find very small ones and are questionable from a research perspective. I will elaborate on each of these points shortly.
Each study they included had a similar format: one group of students was given instruction matched to their preferred learning modality (either V, A, or K), and another group was given “regular instruction not designed to capitalize on any particular modality.”
OK, let’s tackle each of the three points above. The first was that the methods current at the time (pre-1987) for sorting students into visual, auditory, and kinesthetic learners do not do so adequately. Kavale and Forness
found that many of the studies reported a large number of cases where “a
subject not selected for a modality group actually scored higher than a subject
selected on the bases of a modality strength.” This number was 1 in 5 students
across all modalities (V, A, and K) and was 1 in 4 students for the kinesthetic
group specifically. If these students really are being sorted according to
their “learning style,” the authors argue that this is a large number of
improperly sorted students. The authors conclude “… although modality
assessments were presumed to differentiate subjects on the bases of modality
preferences, there was, in actuality, considerable overlap between preference
and non-preference groups.”
Although this study was not specific to VARK, Kavale and Forness are raising the issue of the validity of the learning styles assessment tool. One could certainly argue that VARK may not have these issues. However, previous posts on this blog have questioned the validity of VARK as a learning style assessment tool (see VARK Part 2: More theory or VARK Learning style preferences: group comparisons).
The second point was that many studies find no benefit from modality-matched instruction. That is, when comparing a group that received instruction specific to their preferred “learning style” to a group that just received regular instruction, there was no difference. Kavale and Forness
discuss this at two levels: the whole study level and the individual subject
level. At the whole study level, “The 39 studies reviewed used a variety of
designs and procedures but all testing essentially the same hypothesis:
matching instructional methods to individual modality preferences would enhance
learning. This hypothesis was supported in 13 of the 39 studies (33%), while
67% did not offer support for the modality model.” I believe this quote speaks
for itself.
At the level of the individual subjects, Kavale and Forness
make the following statements:
“Hence, in two-thirds of the cases,
experimental subjects exhibited no gains on standardized outcome assessments as
a result of modality matched instruction.”
“Furthermore… over one third of
subjects receiving instruction matched to their preferred learning modality
actually scored less well than control subjects receiving no special
instruction.”
Again, the same pattern appears here: modality-matched instruction fails to outperform regular instruction a large portion of the time.
So, what about the studies that do find a difference for modality-matched instruction? Well, as point three (above) states, the studies that find benefits for modality-matched instruction find very small benefits and are questionable from a research perspective. Throughout their meta-analysis, Kavale and Forness repeatedly note that the gains seen with modality-matched instruction are small. For example,
“…the gain for modality instruction
translates into only a 6 percentile rank improvement. The improvement indicates
that 56% of experimental subjects are better off after modality instruction,
but this is only slightly above chance level (50%) and indicates conversely
that 44% of experimental subjects did not reveal any gain.”
When they looked at modality-matched instruction and scores on standardized tests, they found
“…modality matched instruction
produced gains of anywhere from 4 to 7 percentile ranks. These levels of
improvement are only slightly above chance gains (50%) and suggest that while
approximately 54% to 57% of subjects demonstrated improvement, 46% to 43% of
subjects receiving this special instruction improved less than control
subjects.”
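The percentile figures in these quotes come from translating a standardized effect size into a percentile rank under a normal distribution. Here is a rough sketch of that translation; the effect sizes below are my own illustrative picks, chosen only because they land in the 4 to 7 percentile-rank range the authors describe, and are not values taken from the paper.

```python
# Sketch of how an effect size maps onto the "percentile rank gain" language
# in the quotes above. Under a normal model, an effect size d places the
# average matched-instruction subject at the Phi(d) percentile of the control
# distribution (the 50th percentile would mean no gain at all).
# The d values below are illustrative choices, not figures from the paper.
from math import erf, sqrt

def normal_cdf(x: float) -> float:
    """Standard normal cumulative distribution function, Phi(x)."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

for d in (0.10, 0.15, 0.18):
    percentile = normal_cdf(d) * 100   # percentile rank of the average matched subject
    gain = percentile - 50             # improvement over the chance level of 50
    print(f"d = {d:.2f}: about the {percentile:.0f}th percentile "
          f"(a gain of roughly {gain:.0f} percentile ranks)")
```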
When they examined modality-matched instruction and reading abilities, they concluded
“… modality instruction had only
modest effects on improving reading abilities. Some differences in
effectiveness emerged when instructional methods were matched to modality
preferences, but the positive effects were small. When modality instruction was
evaluated across reading skills, 50% of the comparisons revealed effects that
were not different from zero. Thus, only limited benefits appeared to accrue to
reading skills when instructional practices attempted to capitalize upon
particular modalities that were assumed to be the preferred instructional modes
for individual subjects.”
Notice the similarities throughout these statements. Across multiple outcome measures, the results are strikingly consistent: the effects are either nonexistent (see point 2 above) or very small. Furthermore, Kavale and Forness had seven “judges” rate the internal validity of the studies that were used. Basically, internal validity is a measure of the strength of a study (i.e., how well were outside factors controlled?). In research, we strive for studies with high internal validity. The seven judges largely agreed on which of the studies had strong internal validity and which did not (93% inter-rater agreement). Once the studies were sorted by internal validity, were there any conclusions to be made?
YES! In fact, the studies with the lowest internal validity produced the largest differences between the modality-matched instruction groups and the control (regular instruction) group. The smallest differences
were found in the group of studies that had the highest levels of internal
validity. Kavale and Forness conclude that “the best studies showed no positive
effects for modality teaching.”
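As an aside, the 93% figure above is simple percent agreement between raters. A toy sketch of that calculation, using made-up ratings rather than the judges’ actual ones, would be:

```python
# Toy illustration of percent inter-rater agreement on internal validity.
# Each study is rated "high" or "low" validity by two judges; the ratings
# here are invented and are not the actual ratings from the paper.
judge_1 = ["high", "low", "high", "high", "low", "low", "high"]
judge_2 = ["high", "low", "high", "low",  "low", "low", "high"]

agreements = sum(a == b for a, b in zip(judge_1, judge_2))
percent_agreement = 100 * agreements / len(judge_1)
print(f"Inter-rater agreement: {percent_agreement:.0f}%")
```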
In summary, I believe Kavale and Forness say it best, so I
will just use their final paragraph:
“Although the modality model has
long been accepted as true, the present findings, by integrating statistically
the available empirical literature, disclosed that the modality model is no
effective and efforts would be better directed at improving more substantive
aspects of the teaching-learning process. Both aspects of the modality model,
testing and teaching, appeared problematic. No reliable assignment of subjects
to preferred modality was found, as evidenced by the lack of distinction
between selected groups, and no appreciable gain was found by differentiating
instruction according to modality preference. Consequently, the modality
concept holds little promise for special education, and the present findings
provide the necessary basis for no longer endorsing the modality model, since
learning appears to be really a matter of substance over style.”
References
Kavale, K. A., & Forness, S. R. (1987). Substance over style: Assessing the efficacy of modality testing and teaching. Exceptional Children, 54(3), 228-239.