Earlier this month, Bellwether Education Partners and the Collaborative for Student Success released a report assessing states’ ESSA plans. As The 74 reported, their reviews found them “largely lackluster,” a judgment that, at first blush, seems to conflict with Fordham’s own generally positive review of all fifty-one ESSA accountability plans. But don’t rely on first blushes.
The key word in the preceding paragraph is “accountability,” which distinguishes our report from theirs and mostly explains why ours was more positive. Although both reports looked at accountability, Fordham’s looked only at accountability—and only at select aspects of it—and we had good reasons for restricting our analysis in this way.
Both projects assessed “Consolidated State Plans” that states sent to the U.S. Department of Education as part of their obligations under the Every Student Succeeds Act. These submissions were typically more than one hundred pages long, and each set forth its state’s intentions in myriad areas, including assessments, accountability, long-term goals, school turnarounds, instructional support, teacher equity, programs for at-risk students, rural education, and much more.
One problem with reviewing everything in these plans—and a reason, we suspect, why neither report did—is that they’re basically big, complex compliance exercises. They comprise lots of blather and paperwork that culminate in pretty words across many realms, words that often don’t amount to much. The trick, then, is to pluck, analyze, and evaluate the parts that do matter.
The Bellwether and Collaborative authors, smart folks we know and respect, identified nine such parts: goals; standards and assessments; indicators; academic progress; all students; identifying schools; supporting schools; exiting improvement status; and continuing improvement.
Fordham, however, went at this differently. In our view, the part of ESSA plans that will matter most is the design of state accountability systems—in particular the ratings or labels that states place on their individual schools, the components and weightings that go into those ratings, and the methods used to develop them. We based this on rigorous and well-respected studies from the NCLB era showing that such ratings can and do drive behavior in schools.
We therefore gauged whether state accountability plans achieved three objectives:
- Did they assign annual ratings to schools that are clear and intuitive for parents, educators, and the public?
- Did they encourage schools to focus on all of their pupils, not just their low performers, by measuring achievement via average scale scores or a performance index, and by giving substantial weight to a measure of annual academic growth for all students?
- Did they fairly measure and judge all schools, including those with high rates of poverty, by basing ratings on how much students learn while in their classrooms, not on pupils’ performance level on the first day of class?
In these crucial but limited areas, the Fordham analysis actually has much in common with the Bellwether/Collaborative report. We found that just nine states lacked clear and intuitive annual ratings; they pegged that number at fifteen. We applauded states’ widespread use of growth measures and metrics that look beyond proficiency rates; and they noted that a bright spot in state ESSA plans was their widespread inclusion of “year-to-year student growth, which gives schools credit for how much progress their students make over time, rather than static determinations about where students are at a given point in time.”
Yet Fordham’s selective approach meant that we ignored much of what Bellwether and the Collaborative assessed. We did not, for example, look at long-term state goals, most of which we fear represent lofty, unrealistic, and functionally meaningless promises.
Neither did we assess consequences and interventions for chronically failing schools. Improving the educational outcomes and opportunities of students attending such schools is indeed important, but research to date offers little hope of success by intervening in the ways that ESSA seems to intend and that states are proposing. There are, however, a few promising strategies, and we’ve come to believe that the best fix may be to let chronically failing schools die, mostly by giving their students better school options, especially new high-quality charters. Still, we’re not surprised that even choice-friendly states opted not to put that politically dicey approach explicitly in their ESSA plans.
Moreover, chronically failing schools are just a sliver of the total, and Fordham chose to focus on components of state plans that affect all schools. That's where annual summative ratings like A–F grades come in. For the 80–90 percent of schools that will never receive a failing grade, we think transparency can do a lot of good.
We also skipped states’ plans for subgroup performance. We concede that it's possible that some states to which we gave high marks may end up labeling schools as great even if those schools do poorly by one subgroup or another. But we doubt that this will happen very often because those would have to be schools where poor and minority kids make up a small percentage of the total—or else their poor performance would drag down the entire school grade—and, given patterns of racial and socioeconomic isolation, there aren't that many such schools. Furthermore, we assessed whether states emphasize a measure of growth for all students, and doing so means, well, focusing on all students, including those in subgroups.
In the end, the Fordham analysis is based on a trio of objectives that we believe states ought to take seriously when designing their school accountability systems. We’re clear about those objectives and why they’re important. But we don’t claim that there’s any one best way to construct these frameworks. And there are certainly tradeoffs. Focusing on all students, for example, could mean that schools and teachers will pay less attention to their low performers. We understand that risk, but there’s also a great risk to the country’s future when we neglect the education of higher-achieving students, especially those growing up in poverty. This is a normative value, and we don’t assume that everyone will share it.
None of this is easy or argument-free, but we stand by our methodology as well as the conclusion that a surprisingly large number of state accountability plans are good news for students. And when it comes to school ratings, student growth, and looking beyond proficiency rates—the basis of our positive outlook—we think Bellwether and the Collaborative actually agree.