Tuesday, August 30, 2011
VARK Part 2: More theory
The next article is co-written by Neil Fleming and David Baume (a higher education consultant). The article makes claims about VARK that appear to be unsupported by evidence. Regarding the purpose of VARK, Fleming and Baume claim:
“It [VARK] can also be a catalyst for staff development – thinking about strategies for teaching different groups of learners can lead to more, and appropriate, variety of learning and teaching.”
This statement asserts that there are “different groups of learners” but provides no evidence for their existence. (In fact, neither this paper nor the previously reviewed paper (Fleming, 1995) includes a single citation!) As with the last paper, it appears that the purpose here isn’t to provide evidence in support of the “learning style” but instead to publicize and encourage its use among educators.
I’m a college instructor and a trained researcher. By research standards, a useful assessment should have a handful of qualities, two of which are validity and reliability. Validity is the idea that a questionnaire/assessment measures what it is designed to measure. For example, an intelligence test that doesn’t measure intelligence is pretty useless. Here is a statement from this paper regarding the validity of the VARK questionnaire:
“We found that VARK was hard to validate statistically… we just didn’t get a good fit with the data.”
So, if the VARK questionnaire doesn’t measure preferences for modality of communication, what does it measure?
The second quality of a good questionnaire/assessment is reliability: whether individual scores are stable or fluctuate. How useful would a scale be to a dieter if it lacked reliability and gave a different result every time, unrelated to the person who stepped on it? If that were the case, one might be tempted to simply keep stepping on and off the scale until it showed a weight they liked. This article freely admits that the VARK questionnaire lacks reliability and even suggests that this is a good quality:
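To make reliability concrete, here is a quick sketch. All scores below are hypothetical; `pearson_r` computes the test-retest correlation between two administrations of an imaginary questionnaire, which is high for a reliable instrument and near zero for an unreliable one:

```python
# Illustrative sketch of test-retest reliability: the correlation between
# two administrations of the same questionnaire. All data are hypothetical.

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length lists."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Hypothetical "visual preference" scores for 8 people, tested twice.
time1           = [10, 7, 12, 5, 9, 11, 6, 8]
time2_stable    = [9, 7, 13, 4, 10, 11, 5, 8]   # roughly the same ranking
time2_unstable  = [7, 9, 10, 8, 6, 8, 11, 9]    # no relation to time 1

print(pearson_r(time1, time2_stable))    # high: a reliable instrument
print(pearson_r(time1, time2_unstable))  # near zero: an unreliable one
```

A dieter’s scale is the `time2_stable` case; a scale whose readings looked like `time2_unstable` would be thrown out.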
“Some learners already know a lot about the way they learn, and need no help from any inventory or questionnaire. For others, doing the VARK questionnaire again and again over time is a worthwhile exercise, even though – maybe because – the scores may vary. VARK works when people find it useful.”
So, what can I learn from VARK if my learning preferences change the next time I take it? I’m all for starting a conversation about various topics, and VARK probably inspires a good conversation on the nature of learning in classrooms. Heck, it and other learning style models have led to a good conversation on learning theory in psychology communities (the conversation this blog was meant to summarize). But many teachers and students are taking their learning style results as gospel, changing the way information is presented (if they are a teacher) or the way they interact in a classroom (if they are a student). I had a student the other day insist on sitting in the middle of the class because they are a visual learner and would struggle if they weren’t right in front of the projected notes. Sure, this student may prefer to learn through visual presentation, but does that mean they should be given preferential seating?
As someone trained in research, the following statement caught my eye:
“When users get their results online, we ask if they think their results are a match to their own perceptions, or don’t match or they don’t know. Those figures run at 59%, 37% and 5%. I know self-perceptions don’t rate highly in research, but I would be worried if those figures were in any other order.”
Fleming and Baume are correct: self-perceptions don’t rate highly in research circles. Consider that when asked about their own intelligence, 85% of people rate themselves as “above average.” Mathematical impossibilities aside, the point is that people are usually not good at judging themselves. In one of my psychology classes, I ask my students to rate themselves on a scale of 1-5 (1 = way below average, 3 = average, 5 = way above average) on various characteristics, including attractiveness, physical strength, kindness, and sense of humor. They are instructed to rate themselves relative to their classmates (just in that one particular class). The results have been the same every time I have run this exercise: on every measure, the average student rating is above a 3. That is, the class average is always above average. For sense of humor, the class average is usually near a 4; apparently everyone is funnier than average. The other stunning thing in these data is the complete lack of 1’s. When self-reporting, people never like to see themselves as the worst.
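The tally is easy to reproduce. The ratings below are made up for illustration; the point is only that when students rate themselves relative to this particular class, an honest distribution should average 3, yet self-reported distributions sit above the midpoint and contain no 1’s:

```python
# Hypothetical self-ratings (1-5, relative to classmates) illustrating the
# classroom exercise. If ratings were honest and relative to this class,
# the mean would sit near the anchor of 3.
ratings = [3, 4, 4, 3, 5, 4, 3, 4, 2, 4, 3, 5, 4, 3, 4]  # made-up responses

mean = sum(ratings) / len(ratings)
print(f"class mean: {mean:.2f}")            # above the 'average' anchor of 3
print(f"number of 1s: {ratings.count(1)}")  # none: nobody rates themselves worst
```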
What’s the point? Self-perceptions are not accurate. But let’s assume for a moment that they are, and look at the numbers: 59% say their results match their own perceptions, 37% say they don’t match, and 5% don’t know. Those are not the kind of numbers that inspire great confidence. So, point 1: the data are not useful. And point 2: even if they were useful, they don’t indicate that VARK is very good.
Now, many learning styles theories have been under attack from the psychology research community over the years. To their credit, Fleming and Baume do address this:
“… it [VARK] shouldn’t be used in research; that is not its strength. Its strength lies in its educational value for helping people think about their learning in multiple ways and giving them options they might not have considered. The statistical properties are not stable enough to satisfy the requirements of research, but then, one of our findings is that no one has been able to design an instrument along these lines that does. So VARK is in good company.
Everyone who uses the VARK loves it, and that’s a great thing to be able to say. So it is obviously striking a chord with almost everyone who uses it. We just have to recognize that the constructs of learning style are too varied to pin down accurately and every instrument I’ve ever considered suffers from this same issue.”
This is a scary statement. If research doesn’t support that learning styles exist, why give resources and energy to anything learning-styles related? Research should be able to shed light on learning styles: if a visual learner truly does better when presented with material visually, then conduct an experiment where you present material visually and aurally and compare the results. Of course, a good experiment will be more complex than that, but learning styles are not outside the realm of research.
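As a sketch of what such an experiment could look like, the simulation below assigns learners of a given preference to matched or mismatched instruction and compares test scores. Everything here is hypothetical, including the 5-point “matched” benefit, which is simply assumed for the simulation; the learning-styles claim is precisely that such a gap should appear in real data:

```python
# Hypothetical sketch of the matched/mismatched experiment described above:
# take learners with a stated modality preference, randomly assign each to
# matched or mismatched instruction, then compare test scores.
import random

random.seed(1)  # fixed seed so the sketch is reproducible

def simulated_score(matched_boost=0.0):
    """One learner's test score (roughly 0-100). Under the learning-styles
    claim, matched instruction adds `matched_boost` points (assumed here)."""
    return random.gauss(70, 10) + matched_boost

# Simulate 100 learners per condition.
matched    = [simulated_score(matched_boost=5.0) for _ in range(100)]
mismatched = [simulated_score(matched_boost=0.0) for _ in range(100)]

def mean(xs):
    return sum(xs) / len(xs)

print(f"matched mean:    {mean(matched):.1f}")
print(f"mismatched mean: {mean(mismatched):.1f}")
# A real study would test this matched-minus-mismatched interaction
# statistically across all modality groups, not just compare two means.
```

If the matched advantage failed to appear with real learners, the claim would be falsified, which is exactly why “it shouldn’t be used in research” is such a strange defense.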
This post is getting a little long, so here is my last piece. The following are more claims made in the paper:
“Modal preferences influence individuals’ behaviors, including learning”
“Preferences can be matched with strategies for learning. There are learning strategies that are better aligned to some modes than others. Using your weakest preferences for learning is not helpful; nor is using other students’ preferences”
“the use of learning strategies that are aligned with a modality preference is also likely to lead to persistence in learning tasks, a deeper approach to learning, and active and effective metacognition.”
“knowledge of, and acting on, one’s modal preferences is an important condition for improving one’s learning”
These claims need to be backed by evidence, and in these papers none is provided. Just because I prefer to learn a certain way does not mean that I learn better that way.
Fleming, N., & Baume, D. (2006). Learning styles again: VARKing up the right tree! Educational Developments (SEDA Ltd), 7(4), 4–7.
Fleming, N. D. (1995). I'm different; not dumb: Modes of presentation (VARK) in the tertiary classroom. In A. Zelmer (Ed.), Research and Development in Higher Education: Proceedings of the 1995 Annual Conference of the Higher Education and Research Development Society of Australasia (HERDSA), Volume 18 (pp. 308–313). HERDSA.