The American Recovery and Reinvestment Act of 2009 marked a massive federal investment in our schools, with more than $100 billion to shore up school systems in the face of the Great Recession. Along with that largesse came two grant programs meant to encourage reform with all of those resources: Race to the Top and School Improvement Grants (SIGs). While Race to the Top aimed to spur system innovation, SIG was intended to facilitate turnarounds of the nation’s lowest performing public schools. Congress appropriated $3.5 billion for the first cohort of SIGs and went on to authorize five subsequent cohorts, bringing the total investment in SIG to approximately $7 billion.
There is plenty of research examining whether this hefty price tag was worth it. Most of it is mixed, and some of the findings are downright disappointing. Back in 2017, Andy Smarick called a federal report on SIG effects a “devastating” blow to Obama’s education legacy and noted that the report “delivered a crushing verdict: The program failed and failed badly.” On the other hand, studies that have focused on state or local results—such as on SIG in Ohio—have been more promising.
A new working paper from the Annenberg Institute at Brown University seeks to add to the discussion by offering the first comprehensive study of the longitudinal effects of SIG on school performance. The paper estimates SIG effects on student achievement and graduation rates for the first two program cohorts in four geographically diverse locations: two states, North Carolina and Washington, and two urban districts, San Francisco Unified School District and the pseudonymous “Beachfront County” Public Schools (the authors are still waiting for permission to use this district’s name). Sixty-six schools were awarded funding in the first cohort during the 2010–11 school year, and thirty-three schools were awarded funding in the second cohort the following year. The data span the 2007–08 school year through 2016–17 in order to include three years before the first cohort and three years after funding ceased. The researchers used state and district administrative datasets on student characteristics, state tests in both math and English language arts, graduation rates, and school contexts. They also controlled for changes in students’ demographic characteristics.
Results show gradually increasing positive effects during the intervention years in both math and ELA in grades three through eight. Effects were larger in the second and third years of the program than in the first. After SIG funding ended, positive effects began to decrease slightly, but were sustained in math through the third or fourth year post-policy (the sixth or seventh year after the school initially received the grant). Perhaps most significantly, effects on graduation were also positive: Four-year graduation rates steadily increased throughout the six- or seven-year period after the start of SIG interventions. Effects on students of color and low-income students were similar to overall effects and were sometimes slightly larger. Results across the four geographic locations were generally consistent but differed in magnitude, a finding that’s in line with previous research indicating that such variation could be a result of local design and implementation decisions.
The authors note that these findings suggest that SIG interventions could be one of the federal government’s most successful capacity-building investments for improving low-performing schools. In fact, the researchers note that SIG effects on test scores in this analysis are “similar to the effects on student test scores estimated for the market-based reforms in New Orleans after Hurricane Katrina in 2005.” But the fact remains that other reputable and wide-ranging studies found far less positive results. As such, it appears the best question regarding SIG effectiveness is “not ‘did SIG work?’ but rather ‘why did it produce results in some places and not others?’”
SOURCE: Min Sun, Alec Kennedy, and Susanna Loeb, Annenberg Institute at Brown University (January 2020).