By Daniel T. Willingham
In about 2003, I concluded that many teachers believe learning-styles theory is accurate. It was perhaps the second or third time I had given a public talk to teachers. I mentioned learning styles in passing as an example of a theory that sounds plausible but is wrong, and I felt an immediate change in the air. Several people said, “Wait, what? Can you please back up a slide?”
Since then I’ve written a couple of articles about learning styles, created a video on the subject, and put an FAQ on my website. Last week I was on NPR’s Science Friday radio program (with Kelly Macdonald and Lauren McGrath) to talk about learning styles and other neuromyths.
I put energy into dispelling the learning styles myth because I thought that audience of educators was representative—that is, that most teachers think the theory is right. But with the exception of one recent study showing that academics often invoke learning styles theory in professional journal articles, there haven’t been empirical data on how widespread this belief is in the U.S.
Now there are.
Macdonald, McGrath, and their colleagues conducted a survey to test the pervasiveness of various beliefs about learning among a sample of 3,048 American adults and 598 educators. Similar surveys have been conducted in parts of Europe, East Asia, and Latin America, where researchers have observed high levels of inaccurate beliefs on these issues.
Learning styles theory was endorsed by 93 percent of the public and 76 percent of educators. Data regarding other neuromyths (common misperceptions about learning or the brain) are shown in the table below (from the paper).
As the authors acknowledge, there are limitations to the interpretation, in particular regarding the sample. The subjects were visitors to the site TestMyBrain.org, and so it’s difficult to know how they differed from a random sample. Still, neuromyths were endorsed at rates similar to those observed in other countries.
Why is acceptance of the idea so high? No one really knows, but here’s my tripartite guess.
First, I think by this point it’s achieved the status of one of those ideas that “They” have figured out. People believe it for the same reason I believe atomic theory. I’ve never seen the scientific papers supporting it (and wouldn’t understand them if I had), but everyone believes the theory and my teachers taught it to me, so why would I doubt that it’s right?
Second, I think learning styles theory is widely accepted because the idea is so appealing. It would be so nice if it were true. It predicts that a struggling student would find much of school work easier if we made a relatively minor change to lesson plans—make sure the auditory learners are listening, the visual learners are watching, and so on.
Third, something quite close to the theory is not only right, it’s obvious. The style distinctions (visual versus auditory; verbal versus visual) often correspond to real differences in ability. Some people are better with words, some with space, and so on. The (incorrect) twist that learning styles theory adds is to suggest that everyone can reach the same cognitive goal via these different abilities; that if I’m good with space but bad with words (or better, if I prefer space to words), you can rearrange a verbal task so that it plays to my spatial strength.
That’s where the idea goes wrong. First, the reason we make the distinction between types of tasks is that they are separable in the brain and mind; we think verbal and visual are fundamentally different, not fungible. Second, while there are tasks that can be tackled in more than one way, these tasks are usually much easier when done one way rather than another. For example, if I give you a list of concrete nouns, one at a time, and ask you to remember them, you could do this task verbally (by repeating each word to yourself, thinking of its meaning, etc.) or visually (by creating a visual mental image). Even for people who are not very good at imagery, the visual method is the better way to do the task. Josh Cuevas has an article coming out early next year showing this point: people’s alleged learning styles count for nothing in accounting for task performance, but the effect of strategy on a task is huge.
A final note: I frequently hear from teachers that they learned about the theory in teacher education classes. I've looked at all of the well-known educational psychology textbooks, and none of them present the idea as correct. But neither do they debunk it. Teachers are, according to the survey, more accurate than the general public in their beliefs about learning, but they should be way ahead. Debunking these ideas in ed psych textbooks ought to help.
Daniel Willingham is Professor of Psychology at the University of Virginia. He is the author of Why Don't Students Like School?, When Can You Trust the Experts?, Raising Kids Who Read, and The Reading Mind (forthcoming). In 2017 he was appointed by President Obama to serve as a Member of the National Board for Education Sciences.
Editor’s note: A version of this article was originally published on Dr. Willingham’s blog, “Daniel Willingham—Science & Education.”
The views expressed herein represent the opinions of the author and not necessarily the Thomas B. Fordham Institute.
The New York Times ran an interminable front-page piece on Sunday raising doubts about the ethics and propriety of teachers who promote commercial products, especially those from big tech firms like Apple and Google, for use by other teachers and their schools. The example that reporter Natasha Singer focused on—“one of the tech-savviest teachers in the United States”—is an ace third grade teacher named Kayla Delzer, whose classroom is in the hamlet of Mapleton, North Dakota. Her brand is Top Dog Teaching, and she does indeed promote a wide range of instructional strategies and commercial products that range from her own line of tee shirts, to books and newsletters she’s written, to plugs for corporate products like the “itslearning” classroom management system.
That Ms. Delzer is a multi-tasking dynamo is not in dispute, nor is her instructional prowess. What the Times found a bunch of “experts” to huff about is the propriety of public-school teachers serving as “ambassadors” for the corporate world—and getting compensated in various ways for doing so.
It’s not a trivial issue—and never is when professionals who are presumably looking after the best interests of those they serve are engaged by outside interests to promote products and services sold by those interests. The most familiar version of this is when physicians are wooed and rewarded by pharmaceutical companies and end up both prescribing the products of those firms more often than might be medically indicated, as well as boosting those products to other doctors, medical students, and patients. As the Times notes, “some academic medical centers now prohibit their doctors from giving industry-sponsored speeches. And some drug companies have stopped giving doctors swag.”
The suggestion posed by the article is that there should be more “public discussion about the ramifications of similar tech-industry cultivation of teachers.”
Sure there should be such “discussion.” But as we start to huff and puff about it, let’s bear a few things in mind.
First, there’s absolutely nothing new about educators promoting commercial products—and getting compensated in various ways for doing so. That’s what happens when salesmen for textbook companies treat school superintendents to golf games and nice lunches, after which the district buys their textbooks. That’s what happens at every education conference I’ve ever attended when attendees are given lots of time to wander through vast halls full of promotions, freebies, and come-ons by the dozens (or hundreds) of conference “sponsors,” i.e., the firms that are underwriting the event itself. That’s what happens when those same firms take out ads in magazines and newsletters subscribed to by teachers and principals—or sent to them as a benefit of union membership. Speaking of which, check out the NEA website and you’ll find leads to innumerable commercial products that are recommended to teachers by other teachers.
Second—pointing out the obvious—we don’t do a very good job of compensating teachers in America and many find they must supplement their incomes in various ways. Ethically and morally, what’s the difference between a teacher who promotes a Google or Microsoft product after school and during vacations, and one who promotes Tupperware, Mary Kay cosmetics, or a particular summer camp? True, the former category includes items that may be used in classrooms, and teachers who promote them should signal to their peers whether they’re being compensated by the company for doing so, but it’s hard to see an ethical concern that can’t be dealt with via transparency.
Third—and not much discussed—shouldn’t we get just as exercised about teachers who promote unproven or even harmful pedagogical ideas, such as “multiple intelligences,” “whole language” reading, and “fuzzy” math? They’re not only jeopardizing the future of children in classrooms led by other teachers who heed their counsel; unlike Ms. Delzer, they’re also ill-serving their own pupils! One of the issues raised in the long Times article is that “there is little rigorous research showing whether or not the new technologies [such as those embraced by “ambassadors” like Ms. Delzer] significantly improve student outcomes.” Fair enough. But we have tons of rigorous research showing that some instructional strategies do improve student outcomes and others do not. How are we to view teachers who employ the latter kind—and who encourage others to employ them, too?
In response to widespread fears that too many students would fail to pass the state’s seven high school End of Course (EOC) tests, Ohio lawmakers recently created additional graduation pathways for the class of 2018. The pathway generating the most discussion allows students to receive a diploma by completing two of nine alternative measures, one of which is earning at least a 2.5 grade point average (GPA) during their senior year.
State Superintendent Paolo DeMaria has defended the inclusion of GPAs as one of the options, saying that “a GPA increasingly both in research and in practice has been shown to be a far better indicator of a student’s readiness for college success and frankly for workforce success than any standardized test.”
DeMaria is partly right. Several analyses have found a link between students’ high school grade average and their college success. For instance, this study from the Institute of Education Sciences (IES) found not only that high school GPAs are an “extremely good and consistent predictor of college performance,” but also that they “encapsulate all the predictive power of a full high school transcript in explaining college outcomes.” These other two studies found that a student’s GPA and her composite ACT score were more predictive than either option alone. (There is little research evidence linking GPAs and workforce success.)
Yet GPAs pose several puzzles. No one denies that a student’s work ethic and level of investment matter—and influence achievement. GPAs can give an indication of the non-cognitive skills that a student will need to be successful at the next level. But before anyone declares the superior predictive power of GPAs and the wisdom of using this to determine whether a student should receive a high school diploma, some important nuances in the research need to be addressed. Consider:
- The focus on college attenders. The majority of studies that gauge whether high school GPA predicts college success do so by looking at how students with various school GPAs perform once they get to college. Yet this approach ignores a huge part of the student population—the students who don’t enroll in college and thus don’t have postsecondary outcomes to compare to their high school GPA. Quite honestly, most Ohio students going to college after high school would comfortably satisfy the state’s EOC exam requirements anyway. Without an accurate way to measure whether GPAs predict success for non-college bound students, it’s difficult to say how useful GPA is as a graduation requirement.
- Not all standardized tests were considered. The majority of studies that examine whether GPAs predict college success use placement tests (like ACCUPLACER) or readiness tests (like the ACT or SAT) as their points of comparison. Some studies use state tests, but those come from other states. No study yet examines how well Ohio’s EOCs can predict success at the collegiate level. Without such a study, it’s impossible to say that GPAs are a better predictor of readiness than the standardized tests that Buckeye students must take.
- The difficulty of predicting workforce readiness. Ohio graduates work in thousands of different jobs in dozens of fields. Because each field and role is unique and complex, it’s difficult to determine on a broad scale whether students are adequately prepared for the workforce. Despite DeMaria’s statement that GPA is the better predictor, there doesn’t seem to be any research suggesting that Ohio students’ GPAs can predict their workforce success. In fact, the only clear data Ohio seems to have on gauging career readiness come from the WorkKeys assessment—a standardized test that the Superintendent recently recommended be eliminated!
Along with these important nuances, one more issue should make policymakers wary of putting too much emphasis on GPAs: their susceptibility to gaming, even when they are a low-stakes measure. One way this occurs is through grade inflation—giving students higher grades than their performance honestly warrants. I taught plenty of students who came to my class with high GPAs, worked very hard, and turned in all their assignments—yet still could not read and write proficiently. If their readiness for college and career were based entirely on their GPAs, they would have been considered well-prepared. In reality, they were years behind.
Kids who are academically behind aren’t the only ones who see inflated grades. A recent article in The Atlantic found that students enrolled in private and suburban public high schools are awarded higher grades than their urban peers despite similar levels of talent and potential. In fact, between 1998 and 2016, the GPAs of students at private and suburban public high schools went up even as scores on the SAT went down.
When it comes to relying on GPAs alone as indicators of student readiness and downplaying the role of standardized tests, policymakers should tread cautiously. Research certainly indicates that high school grades are an important predictor of college success, but many of these studies overlook the students who will enter the military or the workforce and also need a strong academic foundation. Furthermore, without research that specifically links GPAs to workforce readiness and performance, we can’t accurately argue that GPAs predict better than tests like WorkKeys, i.e., tests specifically designed to measure career preparedness.
In an era of rampant grade inflation, it’s irresponsible for state leaders to push aside objective measures that can gauge a student’s academic preparedness and that are not entirely within the control of that student’s teachers. Standardized tests don’t tell us everything we need to know about student potential, achievement, or ability, but they do shine an objective beam on what students know and are able to do. As the old saying goes, it’s important to trust but verify—and good assessments are a key part of verifying that students’ grades align with what they’ve actually learned and are able to do.
On this week's podcast, special guest David Osborne, a director at the Progressive Policy Institute, joins Mike Petrilli and Alyssa Schwenk to discuss his new book, Reinventing America’s Schools. During the Research Minute, Amber Northern examines a blockbuster study finding that the over-identification of minority children in special education is a myth.
Amber’s Research Minute
Paul L. Morgan et al., “Replicated Evidence of Racial and Ethnic Disparities in Disability Identification in U.S. Schools,” Educational Researcher (August 2017).
Families who live in urban areas routinely cite school safety as one of their key reasons for seeking out a charter school. What we don’t know with any certainty is whether charter schools actually are any safer than traditional schools.
Enter a new report from the American Educational Research Journal that examines school safety in charter and traditional schools. Analysts focus their study on Detroit, a city with an alarmingly high rate of crime and poverty. Tragically, in 2013, 55 percent of Detroit high schoolers reported being a victim of violence, and 87 percent reported having a relative or friend shot, murdered, or disabled by violence in the past twelve months. In response, the Detroit Public Schools established a school district police department and assigned roughly two hundred police officers and security personnel to work in the city’s traditional public schools.
Nearly half of all students living in Detroit attend a charter school. Approximately 91 percent of them are African American and 87 percent are economically disadvantaged, compared to 86 percent and 79 percent, respectively, in the city’s traditional public schools. Analysts link student-reported data on school safety from 2014 and 2015 (how safe one feels in the bathrooms of the school, in the hallways, in classes, etc.) with school characteristics, student demographics, average commuting distance to school, parental involvement, and neighborhood characteristics, such as reported crime and the structural vacancy rates for city buildings. Schools were divided into categories based on how far students, on average, commuted to them. A “commuter” school was farther away (an average commute of 2.5 miles or more for elementary schools); a “neighborhood” school was closer (an average of less than 2.5 miles).
Data gathered through the Detroit Police Department show that traditional public schools post higher rates of both reported crime and violent crime in school than do charter schools. Initial findings also show that charter schools exhibited higher perceived safety than traditional public schools (0.68 SD higher). Yet once controls are added for student commute distance and parental involvement—which seek to control for self-selection bias (e.g., more motivated parents may be disproportionately attracted to charter schools, and willing to travel far distances)—these perceived differences mostly go away and are no longer statistically significant. The one exception is “neighborhood” charter schools (those in which most of the students live close by), which maintained their higher perceived safety even after the controls were applied. Analysts posit this may be due to school strategies such as “highly structured learning environments and strict enforcement of behavioral codes,” but that’s speculative given that the study design and controls aren’t robust enough to completely rule out selection bias. Finally, neighborhood crime and structural vacancy were unrelated to perceived school safety, perhaps because so few low-crime neighborhoods exist where schools might locate.
While researchers may care that “controls” washed out the differences in students’ “perception” of safety in many cases, the fact remains that traditional public schools had higher actual rates of both reported crime and violent crime in schools than did charter schools. So they are safer, according to police records. We don’t know why that is (perhaps because of the families choosing them), but we suspect that’s what matters most to parents.
SOURCE: Daniel Hamlin, “Are Charter Schools Safer in Deindustrialized Cities With High Rates of Crime? Testing Hypotheses in Detroit,” American Educational Research Journal (May 2017).
A study published last month by Hugh Macartney of Duke University and John Singleton of the University of Rochester examines how the political composition of school boards in North Carolina is affecting segregation.
They consider elementary schools under the purview of 109 school boards across the state from 2008 to 2013. Year-to-year changes in school attendance zones and segregation rates are then correlated with the election of Democratic school board members.
They find that an increase in the proportion of Democrats on an elected school board was associated with a significant decrease in racial segregation in those districts’ schools. When Democrats gained a majority on a school board, for example, racial segregation decreased by as much as 18 percent. And when Democrats were elected to school boards—regardless of whether this created a Democratic majority—changes in school assignments increased by 0.19 standard deviations over the following five-year period. In other words, students switched schools within that district at a greater rate—due perhaps to things like changed attendance boundaries, the introduction of controlled choice programs, or other efforts to integrate the schools. (Note, however, that determining specific causes for the observed changes is beyond the scope of the study.)
Macartney and Singleton also find that a greater Democratic presence on a school board is correlated with a decrease in the proportion of white students in that district—which, because the external boundaries of the district aren’t changing, suggests that white families are leaving the district or opting for private schools. This was especially true when an election produced a newly Democratic majority, with an average 6-percentage-point decrease in the proportion of white students.
But, analysts note, their results do not indicate that Republicans are actively working to increase segregation in schools—they’re simply more likely to leave school assignments as is.
So what to make of all of this? As Democrats vote to integrate the schools, some whites vote with their feet and go elsewhere. As historians can tell you, this is not a new story.
SOURCE: Hugh Macartney and John D. Singleton, “School Boards and Student Segregation,” NBER (July 2017).