We at Fordham recently released an evaluation of Ohio’s largest voucher initiative, the EdChoice Scholarship. The study provides a much deeper understanding of the program and, in our view, should prompt discussion about ways to improve policy and practice. But the evaluation also makes EdChoice an outlier among the Buckeye State’s slew of education reforms: Unlike the others, it has faced research scrutiny. That should change, and below I offer a few ideas about how education leaders can better support high-quality evaluations of education reforms.
In recent years, Ohio has implemented policies that include the Third Grade Reading Guarantee, rigorous teacher evaluations, the Cleveland Plan, the Straight A Fund, New Learning Standards, and interventions in low-performing schools. Districts and schools are pursuing reform, too, whether by changing textbooks, adopting blended learning, or implementing professional development. Millions of dollars have been poured into these initiatives, all of which aim to boost student outcomes.
But very little is known about how these initiatives are affecting student learning. To my knowledge, the only major state-level reforms that have undergone rigorous evaluation in Ohio are charter schools, STEM schools, and the EdChoice and Cleveland voucher programs. To be sure, researchers elsewhere have studied policies akin to those adopted in Ohio (e.g., evaluations of retention in Florida). Such studies can be very useful guides, and they might even inspire Buckeye leaders to find out what all the fuss is about. At the same time, it is critical to gather evidence on what works in our own state. Local context and conditions matter.
The research void means that we have no real understanding of whether Ohio’s education reforms are lifting student outcomes. Is the Third Grade Reading Guarantee improving early literacy? We don’t know. Have changes to teacher evaluation increased achievement? There isn’t much evidence on that either, at least not in the Buckeye State. This startling lack of information is not a problem unique to Ohio, but it does put us in a tough situation. We have practically no way of gauging whether a course correction is needed (if results are null or negative) or whether a program should be abandoned (if impacts are consistently adverse). Nor do we know which approaches should be replicated or expanded based on positive findings.
Evaluation is no easy task, and there may be legitimate reasons why researchers haven’t turned a spotlight on Ohio’s reforms. Some are very new, and the time might not be ripe for a study. Moreover, there may not be a straightforward way to analyze a particular program’s impact. Only in rare cases can researchers conduct an experimental study that yields causal estimates. These include programs with admissions lotteries (due to oversubscription), as well as cases in which schools implement an experimental program by design. Even then, however, there are limitations. When such studies aren’t feasible, competent researchers can utilize rigorous quasi-experimental methods; yet depending on the data or policy design, isolating the impact of a specific program can still be challenging. And further barriers may be as simple as a lack of funding or political will.
Policymakers can help overcome some of these barriers by creating an environment that is more favorable to research and evaluation. Here are three thoughts on how to do this:
- Create small-scale pilots that provide sound evidence and quick feedback. Harvard professor Tom Kane suggests that there is an “urgent need for short-cycle clinical trials in education.” I agree. In Ohio, perhaps it could look something like this: Under the Third Grade Reading Guarantee, the state could incentivize a group of districts to randomly assign their “off-track” students to different reading intervention programs. A researcher could then investigate the outcomes a year later, helping us learn which program holds the most promise; a bare-bones sketch of that random-assignment step appears after this list. (It would be good to know the costs of each intervention as well.) In the case of ESSA interventions for Ohio’s lowest-performing schools, the state could encourage districts to randomly assign certain strategies to specific schools and then examine the results. Granted, these ideas would need some fleshing out. But the point is that designing policies with research pilots in mind would sharpen our understanding of promising practices.
- Make collecting high-quality data a top priority. To its great credit, Ohio has developed one of the most advanced education information systems in the nation. For example, the state is among just a few that gather information on pupils in gifted programs. But the state and schools can do more, particularly around the reporting of course-level data that can support larger-scale research on curriculum and instruction. For instance, we’ve noticed some apparent gaps in the way AP course taking is documented. Another area in which Ohio can blaze new paths is the accurate identification of economically disadvantaged students. As Matt Chingos of the Urban Institute recently described, researchers can no longer rely on free and reduced-price lunch (FRPL) status as a proxy for poverty. An urgent priority for the state, one that may require cross-agency cooperation, is to create a better way of identifying pupil disadvantage. A reliable marker of socioeconomic status is also critical for policy, as ESSA requires test results to be disaggregated for economically disadvantaged students.
- Include evaluation as a standard part of policy design on the front end. When designing a policy reform, whether at the state or local level, one question that should be asked is, “What is the plan for evaluating whether it’s working?” This might require engaging researchers early and setting aside funds. At the federal level, most programs come with such allocations; Ohio could do the same for its big state-level reforms while also encouraging schools to set aside resources for local “R&D.” If evaluation becomes part of policy design on the front end, the benefits are twofold. First, education leaders should get more timely results than if research were an afterthought, carried out much later in policy implementation (if at all). Second, turning evaluation into standard practice could mitigate its political risk. Naturally, it is dicey to order an evaluation voluntarily, both for a given policy’s champions and its detractors. Advocates won’t want to see negative results, and no critic wants to see positive ones. But a transparent climate around research should lessen the risks of disseminating results.
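For readers curious about the mechanics of the pilot idea above, here is a bare-bones sketch, in Python, of the random-assignment step and a first-pass comparison of results a year later. Everything in it is illustrative: the program names, student identifiers, and placeholder outcome numbers are stand-ins for whatever interventions, rosters, and assessments an actual pilot would use, not anything drawn from a real study.

```python
# Hypothetical sketch of a short-cycle pilot: districts randomly assign
# "off-track" readers to one of several intervention programs, then compare
# average score gains a year later. All names and numbers are illustrative.
import random
import statistics

PROGRAMS = ["Program A", "Program B", "Program C"]  # hypothetical interventions


def assign_students(student_ids, seed=42):
    """Randomly assign each student to one intervention program."""
    rng = random.Random(seed)
    return {sid: rng.choice(PROGRAMS) for sid in student_ids}


def summarize_gains(assignments, gains):
    """Report the mean score gain per program once follow-up data arrive.

    `gains` maps student id -> (follow-up score minus baseline score).
    """
    by_program = {p: [] for p in PROGRAMS}
    for sid, program in assignments.items():
        if sid in gains:  # some students may be missing at follow-up
            by_program[program].append(gains[sid])
    return {p: statistics.mean(v) if v else None for p, v in by_program.items()}


if __name__ == "__main__":
    students = [f"student_{i}" for i in range(300)]
    assignments = assign_students(students)
    # Placeholder outcomes; a real pilot would use next year's assessment scores.
    fake_gains = {sid: random.gauss(5, 2) for sid in students}
    print(summarize_gains(assignments, fake_gains))
```

In an actual pilot, of course, the outcome data would come from the state assessment administered a year after assignment, and a researcher would layer on appropriate statistical tests, attrition checks, and cost comparisons before drawing conclusions.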
Everyone can agree that Ohio needs and deserves a world-class school system that improves achievement for all students. The purpose of education reform is to get us closer to that goal. But research from Ohio remains maddeningly sparse on which changes are working for Buckeye schools and students. Moving forward, authorities at the state and local levels must ensure that rigorous evaluation becomes the rock on which reform stands.