NOTE: The Thomas B. Fordham Institute occasionally publishes guest commentaries on its blogs. The views expressed by guest authors do not necessarily reflect those of Fordham.
The 2015 reauthorization of the federal Elementary and Secondary Education Act—known as the Every Student Succeeds Act (ESSA)—requires states to identify poorly performing public schools and help them improve. Importantly, ESSA grants states flexibility in fulfilling this requirement. That means Ohio has some decisions to make as it creates the state’s accountability plan due to the feds this fall. To inform this decision-making, the Ohio Department of Education commissioned Deven Carlson of the University of Oklahoma and me to use rigorous scientific methods to estimate the impact of recent efforts to turn around struggling schools in Ohio. I write to share the results of this study and to offer some general thoughts on how Ohio might proceed under ESSA.
Our study focused on two recent “school turnaround” initiatives: Ohio’s administration of the federal School Improvement Grant (SIG) program beginning in 2009 and its intervention in “priority schools” beginning in 2012. These programs targeted elementary and secondary schools ranked in the bottom 5 percent of eligible schools in terms of student proficiency rates in math and reading, as well as high schools with graduation rates below 60 percent. Importantly, both initiatives sought to produce rapid and lasting improvements in school quality by requiring significant changes to many aspects of schools’ educational delivery—particularly their leadership and staffing, as well as their use of data to drive instructional and managerial decision-making. These school turnaround efforts took three years to implement and entailed state-level monitoring and technical support.
There was one major difference between the programs, however: SIG was a competitive grant program for which districts applied, whereas the “priority school” program was mandatory and did not provide districts with financial support. Specifically, districts with SIG-eligible schools—those that fell below the performance threshold—could apply to receive up to $2 million in grants per school to support their implementation of one of SIG’s four models. The “priority school” program, on the other hand, required all non-SIG schools below the performance threshold (i.e., those that did not previously apply for and receive SIG funds) to implement a school improvement plan without financial support.
The effects of these interventions were clear. Ohio’s SIG efforts significantly improved student achievement in math and reading. Indeed, the analysis indicates that students in schools that received SIG awards experienced achievement gains of around 0.10-0.15 standard deviations annually, which is the equivalent of approximately 60 extra “days of learning” each year if one assumes a 180-day school year. According to one estimate, by 2014, the average test scores of a SIG school’s students were around 0.55 standard deviations higher than they would have been without the intervention. That is the equivalent of students moving from the 5th percentile to approximately the 14th percentile on the achievement distribution.
Yet Ohio’s “priority school” interventions had no such impacts on achievement. On average, the program’s requirements—such as staffing and leadership changes, data-driven decision-making procedures, increased community engagement, and the direction of Title I funds toward expanded learning time and professional development activities—did not have a positive impact on school quality as measured by student achievement gains on math and reading exams.
Why did student achievement improve in SIG schools but not in priority schools? We cannot say for sure. But the results of this study, as well as those of other rigorous evaluations, suggest some possibilities. In particular, research provides evidence consistent with the notion that managerial commitment and capacity, context, and disruption play important roles in the success of school turnaround efforts.
Managerial Commitment and Capacity
Turnaround initiatives are based on the notion that to improve the performance of a failing school, one must make fundamental changes across the entire organization—piecemeal efforts are inadequate for eradicating the entrenched perspectives and practices that lead to poor performance. In particular, the underlying theory suggests that fundamental change requires managerial commitment, resources, and managerial authority over key inputs, such as teachers.
The SIG program was more likely than the “priority school” model to meet these conditions. Districts applied for the program and selected from one of four improvement models, and ODE awarded grants based on a demonstrated commitment to implementing one of them. Additionally, the SIG program allocated up to $2 million per school, which ended up being over $2,000 per pupil for each of three years. The “priority school” effort, on the other hand, came with no additional dollars. Both the SIG and “priority school” programs featured technical assistance in the development and implementation of improvement plans, and both emphasized and facilitated school reconstitution through the replacement of teachers and principals. But SIG concentrated significant resources on a smaller set of schools in districts that were likely more committed to turnaround efforts.
Context
The problems plaguing poorly performing schools—and the available solutions to those problems—vary significantly across districts. Imposing uniform interventions across schools is therefore unlikely to work. Consider school reconstitution. In the spirit of spurring fundamental and lasting organizational change, the SIG and “priority school” initiatives emphasized the replacement of teachers and principals. This focus is understandable, as research convincingly demonstrates the importance of both (particularly teachers) in determining student achievement and longer-term outcomes. But the assumption underlying this strategy is that superior teachers and principals are available. In hard-to-staff rural and urban districts educating impoverished students—precisely those likely to be in the bottom 5 percent on achievement-based metrics—school reconstitution could actually result in inferior teachers and principals.
The SIG program featured voluntary participation and choice over which model to implement. Districts that applied for and received SIG grants likely perceived the models they chose as viable in their particular contexts. They also were apt to view participation as worth the administrative burden that comes with state oversight. The “priority school” program, by contrast, was mandatory and offered no choice of intervention model, thus lowering the likelihood of a match between the improvement strategy and school context.
Disruption – For the Better
School turnarounds require fundamental organizational change, and such change is disruptive by design. On the one hand, research has shown that turnover among teachers and principals can have a negative impact on student achievement. On the other hand, as noted above, replacing a school’s teachers or principal may be worth it if the replacements are of sufficiently higher quality. In other words, organizational disruption is costly, so policymakers must ensure that the benefits of disruption outweigh those costs.
Studies that find turnaround strategies to be successful also often find that the more disruptive turnaround models were the most likely to yield improvements in student learning (e.g., see this recent study). Our study provides some suggestive evidence to this effect. For example, we found that the SIG Turnaround model was more disruptive than the SIG Transformation model, as it led to far more principal and teacher turnover. Yet there was also some evidence that the SIG Turnaround model led to somewhat greater improvements in student achievement. Thus, it seems that dramatic, fundamental changes may have been more efficacious than incremental changes for this particular set of schools. It’s also important to keep in mind, however, that the schools for which SIG Turnaround made sense were likely the ones that implemented it.
Our study enables us to say with relative confidence that, on average, Ohio schools benefited from SIG turnaround efforts but not from the first wave of “priority school” identification. It does not allow us to explain why with nearly the same level of confidence. Nevertheless, in the context of a larger body of research, our results are consistent with some broad lessons. First, school improvement programs should ensure that school managers have the commitment and capacity (including money) to undertake reforms. Second, programs should be sufficiently flexible to accommodate the highly variable circumstances that districts face. Third, program designers should consider carefully both the risks and benefits of organizational disruption.
To the extent possible, Ohio might focus its resources on a few of the lowest-performing schools whose districts demonstrate a commitment to fundamental change, while avoiding the waste of state resources and the imposition of administrative burdens on schools that see little value in implementing turnarounds. That strategy would allow districts—the locus of managerial authority and capacity at the local level—to implement significant reforms in the schools most likely to change. For persistently low-performing schools that prove incapable of implementing meaningful changes, closure may be the only recourse for districts. As some studies from Ohio and elsewhere have indicated, shutting low-performing schools can have academic benefits if students transfer to better schools. (Also see this study for particularly rigorous evidence of closure’s potential effectiveness.)
Ohio policymakers also should keep in mind that one purpose of accountability systems is to capitalize on the expertise of local managers—to provide them with the authority and capacity to take actions appropriate given their local context. Although micromanagement is always tempting—and there is evidence that specific interventions such as data-driven decision-making and extended instructional time can be very effective—the powerful logic of accountability systems is that schools should be held accountable for outcomes, as opposed to the means by which they generate those outcomes.
Stéphane Lavertu is an associate professor in the John Glenn College of Public Affairs at The Ohio State University. The opinions and recommendations presented in this editorial are those of the author and do not necessarily represent policy positions or views of either the John Glenn College of Public Affairs or the Ohio State University.
 Districts could close a school, “restart” a school under independent management, or implement turnaround models (called SIG Turnaround and SIG Transformation). Nearly all districts chose to implement the turnaround models.
 The study focused on student achievement in math and reading and graduation rates because those were the target areas of these programs. We emphasize the achievement results here because they were a more central part of our study.
 The impact on graduation rates was also positive, but we examined just a subset of high schools that received the grants.