A recent study examines whether federal School Improvement Grants (SIGs) improved student outcomes in low-achieving schools, which, as a condition of accepting the money, had to adopt one of four school-improvement models: turnaround, transformation, closure, or restart. The program also recommended specific practices, such as comprehensive instructional reforms and changes to teacher and principal training. (It should be noted that the Every Student Succeeds Act has since eliminated the SIG program, giving states more control over their turnaround efforts.)
The study compares 490 SIG and similarly situated non-SIG schools across twenty-two states using three analyses over a four-year period: a regression analysis of 2010–11 and 2012–13 student test data; surveys of school administrators in 2011–12 and 2012–13; and a correlative analysis of data from 2009–10 and 2012–13.
The most important finding comes from the regression analysis: SIG dollars and tactics failed to improve math and reading scores, graduation rates, and college enrollment when SIG schools are compared to similar non-SIG schools. This is in line with other recent studies of the program’s effects.
The results of the other two research methods are also worth noting, however. The survey was designed to facilitate qualitative comparisons between SIG and non-SIG (but nevertheless low-achieving) schools. Researchers asked administrators about their schools’ improvement efforts in the areas of instruction, teacher and principal effectiveness, learning time, and operational flexibility. The results suggest that grant-receiving schools used more of the SIG program’s recommended practices than low-performing schools that didn’t receive funds. If true, this would only pile on to the regression analysis’s woeful findings: SIG schools adopted more of the prescribed practices yet still fared no better, which strengthens the case that the program itself was ineffective.
The results of the correlative analysis were a little less glum. Researchers combed the SIG data for trends and correlations (that is, patterns that fall short of statistical significance) in math- and reading-score changes between 2009–10 and 2012–13. On the one hand, there were some signs that the turnaround model, one of the four prescribed school-improvement models, may have slightly boosted students’ math scores. On the other hand, graduation rates may have slightly decreased in SIG schools. So, on net, the news here isn’t great, either.
School Improvement Grants were, of course, well-intentioned, but at some point we ought to abandon policies that aren’t having the desired effect. Andy Smarick, who long ago foresaw this failure and recently asked whether the SIG program is the “greatest failure in the history of the U.S. Department of Education,” thinks it’s past time for states to jump ship. But Morgan Polikoff last month cautioned against the impulse to judge policy effectiveness prematurely. Although “the recent impact evaluation [of the SIG program] was neutral,” he said, “several studies have found positive effects and many have found impacts that grow as the years progress, suggesting that longer-term evaluations may yet show effects.” The good news is that the Every Student Succeeds Act wisely allows states to make that determination for themselves.
SOURCE: Lisa Dragoset et al., “School Improvement Grants: Implementation and Effectiveness,” Mathematica Policy Research (January 2017).