School report cards, the primary mechanism through which Ohio maintains transparency and accountability for academic outcomes, have been a hotly debated topic. Critics argue that the ratings track too closely with pupil demographics; some decry the shift to the more transparent and easily understood A–F rating system; and still others are simply unhappy with the results. We at Fordham strongly support robust report cards as an annual checkup on schools’ health, though we too have offered suggestions for refinement.
Under pressure from traditional education groups, legislators recently mandated the creation of a ten-member committee tasked with reviewing the report card and making recommendations for its improvement. The legislation specifies that three members must be district superintendents appointed by their statewide association—a group that has been critical of the current report card—so it’ll be important that the Senate and House appointees bring other perspectives to the table, such as those of parents, employers, charter leaders, and higher education. Meetings are slated for the fall, and the committee must submit a report by the rather ambitious deadline of December 15, 2019.
Our hope is that the committee will recognize and support the strengths of the existing framework. These include the report card’s emphasis on objective measures of student achievement and growth, the use of a transparent rating system (A–F is surely the most intuitive), and the implementation of a user-friendly overall rating. But room for improvement remains, and the budget bill directs the committee to investigate several of the most contentious report-card-related matters. The following discusses three of the key questions singled out in the legislation and how I believe the committee should proceed.
Question 1: How many years of data should be included in the Progress component? An essential piece of the report card, the Progress component uses the state’s “value-added” measure to gauge student growth over time. Because value added controls for students’ prior achievement, the results don’t correlate closely with demographics, thus offering an important look at school performance apart from pupil backgrounds.
It may sound wonky, but significant debate has emerged around whether Ohio should use a single-year value-added score or one averaged over multiple years. Currently, the state relies on a three-year average. For instance, a school whose value-added scores are 5.0, -1.5, and 0.5 over the past three years would have an average score of 1.3, and this score would be used to determine the school’s value-added rating. The upside of a three-year approach is that, akin to a moving average that smooths stock prices to make larger trends visible, it helps to iron out fluctuations that arise in yearly results, which have in the past led some to question the measure. The tradeoff, however, is that older data may not reflect current school performance. This is particularly problematic for schools that are undergoing turnarounds and are starting to show stronger results but continue to be dragged down by previous scores.
Proposed solution: Ohio should maintain a multi-year average but modify the calculation by implementing a weighted average that places more emphasis on the current-year score than on prior years. This would help guard against large “swings” in school ratings from year to year—e.g., going from an “A” to an “F”—while ensuring that the most recent performance is more heavily reflected in the results. A straightforward way to do this would be to weight the most recent year at 50 percent and each of the two prior years at 25 percent.
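For readers who want to see the arithmetic, here is a minimal sketch in Python using the hypothetical school above. The assumption that the scores are listed from oldest to most recent is mine, made purely for illustration.

```python
# Sketch: simple vs. weighted three-year value-added averages.
# Scores are listed oldest to most recent (an assumption for this example)
# and match the hypothetical school described above.
scores = [5.0, -1.5, 0.5]

# Current approach: a straight three-year average.
simple_avg = sum(scores) / len(scores)

# Proposed approach: weight the most recent year at 50 percent
# and each of the two prior years at 25 percent.
weights = [0.25, 0.25, 0.50]  # oldest -> most recent
weighted_avg = sum(w * s for w, s in zip(weights, scores))

print(f"Simple average:   {simple_avg:.3f}")    # 1.333
print(f"Weighted average: {weighted_avg:.3f}")  # 1.125
```

Either way, a single bad year can’t sink the rating on its own, but under the 50/25/25 weighting the most recent year counts twice as much as either prior year, which helps the turnaround schools described above.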
Question 2: How should grades be assigned for the Progress component? To generate the component’s ratings, Ohio relies on value-added “index scores,” measures of statistical certainty that indicate whether student growth is more or less than what was expected for the year. Ohio wisely translates esoteric value-added index scores into more intelligible school ratings. But the grading scale that determines these ratings has stirred controversy, likely due to the near pass-fail distribution of ratings. In 2017–18, 76 percent of schools received either an A or F on the overall value-added measure, a percentage that doesn’t appear to reflect the wide range of underlying scores. Seemingly dissatisfied with the ratings distribution, legislators recently enacted substantial changes to the grading scale in the state budget (HB 166). As shown in Table 1, the new grading scale—expected to come into effect in the fall 2020 report-card release—lowers the performance standards for each rating, which will in turn inflate value-added ratings across the state.
Table 1: Grading scale used to determine value-added ratings
* This grading scale is likely to be used for the 2018–19 school report cards as HB 166 goes into effect on October 17, 2019 (after the release of the report cards). The scale displays the value-added index scores (the value-added gain or loss divided by the standard error) associated with each rating.
Proposed solution: Value added remains a rigorous and valid measure of pupil academic growth, and it’s one of the few measures that doesn’t correlate closely with student demographics. Legislators would be smart not to throw the baby out with the bathwater. However, given the continuing frustration with the present reporting and grading systems, Ohio should move away from value-added index scores (recall, they are measures of statistical certainty) and instead focus on the amount of academic growth occurring in a school—data that are available but not currently used for report-card purposes.
Such a shift, also suggested recently by two Ohio State University researchers on this blog, would fundamentally change the question that report cards seek to answer. Instead of asking, “How sure are we that students made statistically significant gains or losses?” the Progress component would ask, “How much achievement growth is the average student making?” Educators, parents, and taxpayers are more apt to care about how much growth happens in a school than about statistical certainty. This approach could also support a more defensible rating system: It’s harder to argue with a “D” or “F” if data indicate that the average student slid from the 30th to the 20th percentile. It should also allow us to better identify extraordinary high-poverty schools that are helping students make up large chunks of ground (not just eking out a barely positive but “statistically significant” gain).
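To make the contrast concrete, here is a minimal sketch in Python of the two questions. It is illustrative only: the index-score function follows the definition in the Table 1 note (the gain or loss divided by its standard error), the growth function simply averages students’ percentile changes, and the specific numbers and function names are hypothetical.

```python
# Illustrative sketch only; not the state's actual calculation.

def index_score(gain, standard_error):
    """Statistical-certainty question: how confident are we that the gain
    differs from zero? (Per the Table 1 note, the index is the value-added
    gain or loss divided by its standard error.)"""
    return gain / standard_error

def average_percentile_growth(start_percentiles, end_percentiles):
    """Growth-magnitude question: how far did the average student move?"""
    changes = [end - start for start, end in zip(start_percentiles, end_percentiles)]
    return sum(changes) / len(changes)

# A small but precisely measured gain can produce a large index score...
print(index_score(gain=0.3, standard_error=0.1))              # 3.0 -> "significant"

# ...while a growth-centered measure reports how much ground students gained or
# lost, e.g., a school whose average student slid from the 30th to the 20th percentile.
print(average_percentile_growth([30, 30, 30], [20, 20, 20]))  # -10.0 percentile points
```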
Question 3: How should the Prepared for Success component be designed—and should additional indicators be added? First appearing as a graded component in 2015–16, Prepared for Success is a relatively new feature of Ohio’s school report cards. It has a two-tiered structure: On a “primary” level, when students earn remediation-free scores on the ACT or SAT, an honors diploma, or industry credentials, their school receives credit (i.e., one point). Above that is a “bonus” structure whereby schools receive extra points when students pass an AP or IB exam, or earn college credit through dual enrollment. Although a career-oriented indicator exists—industry-recognized credentials—concerns have been raised that the component focuses too heavily on college-ready metrics. Suggestions for additional indicators have included military enlistment, job readiness “seals,” or apprenticeships after graduation. Debate has also revolved around removing the bonus structure altogether and instead placing the AP/IB and dual-enrollment indicators on the same “tier” as remediation-free scores and the other primary measures.
Three key considerations should be kept in mind as the committee examines the Prepared for Success component. First, policymakers should be careful not to cram too much data into the system and thus create an immense, complicated component. Second, for formal report-card purposes, they should use reliable data and deploy metrics that are difficult to “game.” It’s not clear whether the state yet collects sound data on military enlistment or apprenticeships,[1] and policymakers should be wary of the subjective job-readiness seal as an accountability measure. Third, policymakers should be mindful that post-secondary outcomes—e.g., college enrollment or apprenticeships after high school—are not entirely within the control of K–12 schools. That is likely why Ohio reports college enrollment and completion rates within this component but refrains from rating schools based on those data.
Proposed solution: Provided that Ohio collects reliable data, the state should add military readiness and enlistment as a “primary” indicator, especially now that it’s an approved graduation pathway. As for the component structure, shifting to a single tier whereby schools earn one point when students meet any of the seven indicators of readiness—the current six plus military readiness—would produce more interpretable results (e.g., 75 percent of students meet a college-and-career-ready target). That being said, the approach removes incentives for schools to encourage their highest-achieving students—perhaps those who earn remediation-free scores earlier in high school—to meet even higher goals such as passing AP or IB exams. While a close call, my own view is that Ohio should retain the bonus structure as-is for the purposes of assigning ratings, but it should also begin to report the percentage of students (unduplicated) who meet any of the primary indicators to create a clearer picture of readiness in each district and school.
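To illustrate how the two structures differ in practice, here is a rough sketch of the arithmetic in Python. It is hypothetical throughout: the 0.3-point bonus value, the field names, and the tiny three-student roster are stand-ins for illustration, not figures drawn from the report card.

```python
# Hypothetical Prepared for Success arithmetic (the 0.3 bonus weight is illustrative).
students = [
    {"remediation_free": True,  "honors_diploma": False, "credential": False,
     "ap_ib": True,  "dual_enrollment": False, "military_ready": False},
    {"remediation_free": False, "honors_diploma": False, "credential": True,
     "ap_ib": False, "dual_enrollment": False, "military_ready": False},
    {"remediation_free": False, "honors_diploma": False, "credential": False,
     "ap_ib": False, "dual_enrollment": False, "military_ready": False},
]

PRIMARY = ("remediation_free", "honors_diploma", "credential", "military_ready")
BONUS = ("ap_ib", "dual_enrollment")
BONUS_VALUE = 0.3  # hypothetical weight for the "bonus" tier

# Current-style two-tier score: one point per student meeting a primary indicator,
# plus a bonus when that student also passes AP/IB or earns dual-enrollment credit.
two_tier = sum(
    1 + (BONUS_VALUE if any(s[b] for b in BONUS) else 0)
    for s in students
    if any(s[p] for p in PRIMARY)
)

# Single-tier alternative: the unduplicated share of students meeting ANY indicator.
ready = sum(1 for s in students if any(s[k] for k in PRIMARY + BONUS))
single_tier_pct = 100 * ready / len(students)

print(f"Two-tier points: {two_tier:.1f}")             # 2.3
print(f"Unduplicated ready: {single_tier_pct:.0f}%")  # 67%
```

The single-tier percentage is easier to explain to families, while the two-tier total preserves the extra credit for AP, IB, and dual enrollment; that is the tradeoff the committee will need to weigh.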
* * *
As an annual check on the performance of schools, robust and transparent report cards remain critical to a healthy K–12 education system. The current iteration of Ohio’s report cards, which has won praise from national education groups, has important strengths for which the committee should voice support. But it’s always worth examining ways to improve the system, and with some careful adjustments, legislators could create a report card that is more usable for Ohio families and communities and fairer for educators.