Editor's note: This post is the sixth entry of a multi-part series of interviews featuring Fordham's own Andy Smarick and Jack Schneider, an assistant professor of education at Holy Cross. It originally appeared in a slightly different form at Education Week's K-12 Schools: Beyond the Rhetoric blog. Earlier entries can be found here, here, here, here, and here.
Schneider: In our previous post, you implied—through one of your fictional stories—that research could be used in the courts to establish particular policy positions, and I'd like to follow up on that.
I'm perpetually frustrated by the fact that, for every complex issue, there is competing research to cite. It's a real dilemma for which I don't really see a solution. Maybe we can talk through this a bit.
Smarick: I actually see the vast majority of research as complementary, not competing.
Studies on the same subject often ask different questions, use different data sets, and have different methodologies. So if you only read the titles, you might think two reports are in conflict; but once you get into the details, you see that they paint a fuller picture of some issue when taken together. Let me give you just one very simple example.
Some research shows that early-childhood programming can help disadvantaged kids show up for kindergarten much better prepared to learn. Other research shows that some of these programs aren't effective and that, in lots of cases, the benefits of pre-K can wear off somewhere down the line (say, when the students hit third, fourth, or fifth grade).
All of that can be true. That is, not all early-learning programs are of the same quality. Some are amazing, and some are the opposite of amazing. And if kids go from a great pre-K program to a low-performing elementary school, their achievement gains understandably fade. I don't look at those studies as contradictory; I put them together and say, "When we do have early-learning programs, let's do our utmost to ensure they are of high quality, and let's make sure the receiving elementary schools are ready to help these kids continue to succeed."
With all of that said: Yes, of course there are low-quality studies that produce surprising results because, well, the studies aren't done very well. And it's also the case that consumers of research, if they only read executive summaries or purposefully spin findings to fit their own narratives, misinterpret what research is telling us.
But for the most part, I find that smart, curious, discriminating consumers of research use academic literature to build thorough, sophisticated, nuanced views of tough questions.
Schneider: I actually agree with you, though I'd say problems are less a product of low-quality studies and more a result of something else—searching for solutions in a field where there aren't any. Education is so complex that there are no clear policy routes. Sure, one study may offer a clear direction. But when you take them collectively, you're going to find something much messier—and much harder to craft policy around.
Take class size, for instance.
The research on class size indicates that blanket reductions don't produce much measurable change. Yet there is also research that class size matters. So how do we make sense of this? Well, as with most things in education, it depends how it's done. If you just reduce class size across the board, the first thing you're going to have to do is hire a lot of new teachers—many of whom will be of lower quality than those already employed. You're also going to have the majority of teachers working exactly as they did before because, without professional development around how to leverage their smaller classes, many teachers will simply do what they've always done. Finally, it's worth mentioning that the effects of smaller classes are often measured only by standardized test scores, which obviously miss a lot of the potential impact.
You can substitute almost any topic for class size here. School funding. Teacher training. Whole school reform.
The challenge, then, is to try to craft policy that recognizes all of this nuance. Nuance is a tough sell, though—much tougher than something simpler and more straightforward. And unfortunately, there are those who cherry-pick individual studies to push particular agendas.
In short, those trying to deeply engage with the research are at a disadvantage relative to those with a louder and simpler message. And I think the latter group constitutes a majority.
Smarick: I think you are being overly sympathetic to researchers and unfair to policy leaders.
A researcher producing a paper can succeed with a low p-value, a high R-square, or an acceptance note from an academic journal.
Policymakers, meanwhile, have to make extraordinarily difficult choices that influence hundreds, thousands, or even millions of kids. These decisions take place in the complicated context of authorization schedules, appropriations cycles, budget revisions, committee markups, OMB circulars, IG findings, GAO reports, court orders, statutory text, regulatory language, guidance documents, civil service rules, union contracts, procurement processes, and much, much more.
My point is that if there's a disconnect between research and policy, then academics, many of whom know too little about the actual work of policymaking, need to shoulder some of the blame. I've come across too many researchers who seem to believe the work is done once the regression is constructed and SPSS spits out a statistically significant result.
That's a terrific start. But it's just the start.
Schneider: Is there weak research out there? Absolutely. But shoddy work isn't fooling anyone, at least not in the scholarly community. And you're right: Any scholar who thinks that producing a study is enough to make an impact has a lot to learn. My own research supports that fact. Still, research doesn't get misused because particular studies are weak. It gets misused because those seeking to effect policy change use research to fortify, rather than to construct, their arguments. I think that's irresponsible.
I'd also like to make an observation here about what "counts" as research. You mention p-values, R-squares, and regression—all of which implies an orientation toward quantitative research. And while I don't dismiss the value of quantitative research, I think it's important to note its limitations. There's a lot that can't be measured quantitatively; and that stuff also matters when you're trying to piece together an accurate picture of reality. Additionally, though quantitative analysis may seem more objective than qualitative work, it really isn't. The subjective decisions just get made at different stages in the research process.
Smarick: Obviously we disagree about the work of policy leaders. But we definitely agree on the importance of qualitative research.
Now, I think every aspiring policy person should be conversant in statistics, and the history and lessons of the famous Brandeis Brief should be a requirement. So I'd never argue against numbers.
But the centuries-spanning debate of Hobbes, Locke, Rousseau, Madison, and Hamilton wasn't premised on math. The arguments of Democracy in America, Common Sense, Uncle Tom's Cabin, The Narrative of the Life of Frederick Douglass, The Jungle, 1984, The Road to Serfdom, and A Theory of Justice are decidedly qualitative.
The interesting policy challenge, though, is that it's actually a whole lot easier to explain the implications of a regression variable's coefficient than why Nozick won the National Book Award for brilliantly challenging Rawls's Second Principle of Justice.