Please note the update posted on May 15th at the bottom of this blog post.
Tucked into a raft of House amendments to the governor’s budget legislation are changes to state law that would have serious ramifications for school accountability and transparency. The House proposals, never debated in the chamber’s education committee, would drastically alter the way Ohio produces overall school ratings.
Under current law, those ratings are calculated using a weighting system that incorporates multiple dimensions of academic performance. The House’s amendment, however, would throw out this more holistic system and instead use just one measure—the higher of either the performance index or value-added progress rating—as the overall grade.
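To make the mechanics concrete, here is a minimal sketch in Python contrasting a weighted composite with the House’s higher-of-the-two rule. The component scores, the 50/50 weighting, and the letter-grade cut points below are purely hypothetical illustrations; Ohio’s actual formula and cut scores differ.

```python
# Sketch of the two approaches to an overall rating.
# The weights and letter-grade cut points are hypothetical, chosen only
# to illustrate the mechanics; Ohio's actual formula differs.

def to_letter(score: float) -> str:
    """Convert a 0-100 composite score to a letter grade (hypothetical cut points)."""
    if score >= 90:
        return "A"
    if score >= 80:
        return "B"
    if score >= 70:
        return "C"
    if score >= 60:
        return "D"
    return "F"

def weighted_rating(performance_index: float, value_added: float) -> str:
    """Current-law style: blend achievement and growth into one composite."""
    composite = 0.5 * performance_index + 0.5 * value_added  # hypothetical 50/50 weighting
    return to_letter(composite)

def higher_of_rating(performance_index: float, value_added: float) -> str:
    """House amendment style: the overall grade reflects only the higher measure."""
    return to_letter(max(performance_index, value_added))

# A district with weak achievement (F-range) but strong growth (A-range):
pi, va = 55.0, 95.0
print("Weighted composite:", weighted_rating(pi, va))   # "C" -- both measures count
print("Higher of the two:", higher_of_rating(pi, va))   # "A" -- the weak measure disappears
```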
It’s critical to get overall school ratings right, as they aim to offer the public clear, prominent signals about the performance of districts and schools. They are also used to determine when the state has an obligation to intervene in chronically low-performing schools for the sake of the students attending them. Though legislators could and should refine the current approach to assigning overall ratings (here are our suggestions), the House proposal uses a machete when a scalpel is in order. It’s a bad idea. Here’s why.
The House proposal creates a distorted view of overall school quality
Most of us would agree that a student with a B in math but an F in English shouldn’t be considered fully on-track. Why? Students need a strong foundation in both subjects, not one or the other. In like manner, we need to consider both student achievement and growth when evaluating overall school quality. Recall that achievement—as captured by the performance index—considers how students fare at a single point in time, providing information about whether pupils are meeting state academic standards (answering the question, “Are kids on track?”). Growth, or “progress,” as measured by value added, completes the picture by gauging how student performance changes over time (answering the question, “Are kids catching up?”).
To their credit, Ohio policymakers have long understood that achievement and growth matter, and that both should be incorporated into the overall rating. But under the amendment, a one-sided view of academic performance would emerge. This has real-world implications. Consider an example: A few years ago, Dayton Public Schools received an “A” value-added rating, its only top mark on the measure since the state began assigning letter grades. Under the House proposal, the district would’ve received an overall “A” for the year—a rating that would put it in elite company. But such a rating sweeps under the rug the persistent achievement struggles in Dayton, where only one in four students meets state academic standards, and even fewer go on to complete college. While district officials might celebrate, families and community members would likely miss important information about pupil achievement.
Though this is one example, there are dozens, maybe hundreds, of cases at the school level where the proposed system would give satisfactory-to-stellar marks to schools where tragically few students read, write, and do math proficiently. (It may also conceal schools where high-achieving students aren’t progressing.) Ohio families and taxpayers deserve a clear sense of whether students are achieving at high levels and making solid growth from one year to the next. Unfortunately, awarding schools the higher of the two ratings covers up potential weaknesses and creates distorted views of school quality.
It unnecessarily softens school accountability
It’s no secret that accountability for student outcomes is under fire, mainly from the adults working in the systems being held to account. The House proposal is yet another push for softer accountability. Because it calls for the use of the higher of the performance index or value-added measure—instead of combining them—districts and schools would receive rosier overall ratings. Consider the distribution of districts’ actual overall ratings from 2017–18 (solid bars) and projected overall ratings under the House plan (striped bars). Predictably, the House’s approach inflates overall grades, with an additional 103 districts receiving A’s and an additional 35 districts receiving B’s. Conversely, fewer districts receive the less superlative grades. This is not to say that there is a “right” or “wrong” distribution. But it does suggest that this attempt to upend school ratings may be less about improving accountability structures and more about using state law to produce a cheerier picture of school performance.
Figure 1: Distribution of district overall ratings under the current method and House proposal
It gives low-performing schools a free pass
Since overall ratings are linked to consequences for poor performance, inflated ratings would allow some low-performing district schools to escape consequences. For instance, just a single year of an “A” rating on value-added—or any rating above an “F”—would enable otherwise deeply troubled schools to avoid school improvement efforts under the House’s proposal, which ditches district-level interventions via academic distress commissions and focuses instead on improvement at the school level. This is especially apt to occur under another unwise House proposal wherein the state would revert to using a one-year, instead of multi-year, value-added score that is more susceptible to “swings” between letter grades. (Recall the example above of Dayton Public Schools, which received an “A” almost by chance in a year when a one-year score was used.) Moreover, the juiced school ratings might also allow poor-performing charter schools to avoid closure. Finally, in a truly troubling move, it appears that the House legislation would extend yet another “safe harbor” period in which schools are shielded from sanctions associated with poor results. The bill prohibits the use of school ratings to determine consequences in a year when any change, no matter how minor, is made to the rating system, and it also calls for a “reset” of the timelines that determine sanctions (e.g., the three-year timetable for determining automatic closure of low-performing charter schools).
Fortunately, there was one silver lining in the wreckage: The House, realizing that inflating school ratings would have a detrimental effect on school choice programs connected to the accountability system, including eligibility for EdChoice scholarships, passed a last-minute amendment ensuring its proposed changes don’t affect students’ choice options. Nevertheless, this move hardly redeems the larger gutting of state accountability laws.
* * *
When it comes to the House’s proposals around report cards and school accountability, lawmakers need to pause and remember their purpose: to give educators, policymakers, and, most importantly, parents a clear, honest accounting of how schools perform on critical gauges of student achievement and growth. Though there is room for improvement in the state’s report card system, an either-or approach to the overall rating is not the right way forward. Here’s hoping that as the bill moves through the Senate, legislators will think twice about this provision.
Update May 15, 2019: An alternate interpretation of the House overall ratings proposal has surfaced since the drafting of this piece. The post above was written based on a Legislative Service Commission (LSC) bill analysis indicating that the overall rating would be based solely on the higher of either the performance index or value-added progress rating (and thus would exclude the other report card components, such as Gap Closing or Graduation). The Columbus Dispatch also ran an article based on this reading of the legislation. However, without any change in the legislative language, LSC later amended its analysis (after this blog post was written) to indicate that the other report card components would be included in the computation of overall ratings. The actual language in the House-passed Amended Substitute House Bill 166 (see lines 23632-23654) is ambiguous, perhaps explaining the confusion.
The arguments and conclusion of the foregoing piece are not substantively different under the revised LSC interpretation, though the projected distribution of districts’ overall ratings shown in Figure 1 above would be different. (Under the revised LSC interpretation, the State Board of Education would probably need to redo the rating formula.) Either reading of the legislation, however, is likely to yield systematically higher district and school ratings.
- Aaron Churchill