Ohio’s Gap Closing report card component reports how students in certain subgroups perform on state tests, and how their graduation rates compare, relative to the collective performance of all students statewide. The subgroups include racial/ethnic groups, students with disabilities, and economically disadvantaged pupils. Gap Closing is one of six major report card components and makes up 15 percent of a school district’s rating under Ohio’s summative grading formula, which officially takes effect in 2017-18.
Currently, Gap Closing compares subgroup proficiency on state assessments and graduation rates to a set statewide standard, also known as an Annual Measurable Objective (AMO). These objectives rise gradually over time, heightening expectations for subgroup performance. When a school’s subgroup meets the AMO, the school receives the full allotment of points (“full credit”). When the subgroup fails to meet the objective, the school receives no credit, unless it improves relative to the prior year; in such cases, the state awards partial credit. Those points are tallied across subgroups and divided by the points possible to compute a component grade, reported on an A-F scale. In certain circumstances, a school’s Gap Closing letter grade can be demoted (e.g., an A drops to a B).
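To make the mechanics concrete, here is a minimal sketch of the tally. The partial-credit amount, the one-measure-per-subgroup simplification, and the letter-grade cutoffs are all illustrative assumptions, not ODE’s actual parameters:

```python
def gap_closing_points(subgroup_rate, prior_rate, amo):
    """Points earned for one subgroup on one measure.

    Full credit (1.0) for meeting the AMO; partial credit (0.5 here,
    an assumed amount) for year-over-year improvement; otherwise zero.
    """
    if subgroup_rate >= amo:
        return 1.0
    if subgroup_rate > prior_rate:
        return 0.5  # partial credit for improvement (illustrative value)
    return 0.0


def component_grade(results):
    """results: list of (subgroup_rate, prior_rate, amo) tuples.

    Points are tallied across subgroups and divided by points possible;
    the letter-grade cutoffs below are illustrative, not Ohio's.
    """
    earned = sum(gap_closing_points(*r) for r in results)
    pct = earned / len(results)
    for cutoff, letter in [(0.9, "A"), (0.8, "B"), (0.7, "C"), (0.6, "D")]:
        if pct >= cutoff:
            return letter
    return "F"
```

Note how the all-or-nothing structure works: a subgroup one point shy of the AMO and one that misses it badly earn the same zero, unless each happens to improve on the prior year.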
Without a doubt, Gap Closing is a complicated report card element—it assesses the performance of ten distinct subgroups based on several different measures. In fact, one of us has suggested scrapping it altogether and starting over. Meanwhile, the Ohio Department of Education’s (ODE) ESSA feedback process yielded suggestions to rework the component—it was panned for not providing enough credit for the progress of students falling short of proficiency—and many Ohioans deem Gap Closing to be the “least useful” report card measure. In response to the feedback—and to some new federal requirements—Ohio’s draft ESSA plan proposes some important changes to the component. Let’s take a look.
First, ODE would gauge subgroup achievement using the performance index instead of raw proficiency rates. Most readers are probably familiar with the performance index—it looks at achievement at multiple performance levels, such as proficient and advanced—as it has been used in overall school accountability for many years (just not to gauge subgroup achievement). Using the performance index instead of proficiency rates for subgroups is a good idea since it encourages schools to pay attention to students at all parts of the achievement spectrum, not just those at or around the proficiency bar.
Second, ODE plans to meet a new ESSA requirement—tracking the progress of English language learners (ELLs)—by creating a new indicator within the Gap Closing component. Instead of using Ohio’s general state assessments, this measure of ELL progress will use an alternative assessment: the Ohio English Language Proficiency Assessment (OELPA). The ELL portion will take into account English learners who attain proficiency on OELPA and those who make improvements but have not yet met the proficiency standard.
Third, Ohio is proposing to reduce its minimum “n-size” used for the Gap Closing component from thirty to fifteen students. For example, under the current n-size rules, a school with twenty students with disabilities would not be held accountable for their achievement as a separate subgroup. But it would under ODE’s ESSA plan. The upside of this proposal is that more schools will be held accountable for the separate performance of more subgroups, yielding greater transparency around results. The tradeoff is that it could force otherwise high-performing schools into one of Ohio’s intervention categories—a "focus" or "watch" school—based on the achievement of a smaller number of pupils than in previous years.
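The n-size rule is a simple threshold, which the school-with-twenty-students example illustrates directly (the function name here is just for illustration):

```python
MIN_N_CURRENT = 30   # current minimum subgroup size
MIN_N_PROPOSED = 15  # proposed under Ohio's draft ESSA plan


def held_accountable(subgroup_size, min_n):
    """A subgroup counts toward Gap Closing only if it meets the n-size."""
    return subgroup_size >= min_n


# A school with 20 students with disabilities:
held_accountable(20, MIN_N_CURRENT)   # False under current rules
held_accountable(20, MIN_N_PROPOSED)  # True under the ESSA proposal
```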
Two major concerns persist on Gap Closing.
First, ODE will continue to use annual changes in subgroup achievement as a measure of improvement. Quite frankly, it should stop doing this. The calculation doesn’t account for changes in the composition of a school’s subgroups from year to year. As a result, it is not necessarily correct to say that a school with a higher performance index score for, say, black students than in previous years has “closed the achievement gap.” The gain might simply reflect an influx of students with stronger academic backgrounds.
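A quick numerical sketch, with entirely made-up numbers, shows how a composition shift can masquerade as gap closing:

```python
# Year 1: a subgroup of 20 students averaging 80 on the performance
# index, plus 10 lower-scoring students averaging 60.
year1_scores = [80] * 20 + [60] * 10
year1_avg = sum(year1_scores) / len(year1_scores)  # ≈ 73.3

# Year 2: the 10 lower-scoring students leave and 10 students with
# stronger academic backgrounds (averaging 90) enroll. No individual
# student's score changed at all.
year2_scores = [80] * 20 + [90] * 10
year2_avg = sum(year2_scores) / len(year2_scores)  # ≈ 83.3

# The subgroup average jumps 10 points without any actual improvement.
```

Nothing about instruction changed between the two years, yet a year-over-year comparison would credit this school with substantial "improvement."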
Perhaps the simplest approach to Gap Closing is to treat it as a status measure and call it a day. In other words, document subgroup achievement gaps (if they exist), but don’t try to evaluate whether a school is narrowing them from one year to the next. As Matt DiCarlo of the Shanker Institute writes, “[changes in achievement gaps] are poor gauges of school performance and shouldn’t be the basis for high-stakes rewards and punishments in any accountability system.” For more on the problems of “gap closing” in accountability settings, see DiCarlo’s great posts here and here.
Second, ODE should reconsider how it premises sanctions on Gap Closing grades. Its ESSA proposal says that schools earning a D or F on Gap Closing for two consecutive years will land in “school improvement” status. As our colleague Jamie Davies O’Leary discusses, an overwhelming number of schools and districts currently receive poor Gap Closing ratings. ODE should make sure that it is not going to sanction hundreds, if not thousands, of Ohio schools based on the results from a single report-card component. While they’re at it, policymakers should also reduce the weight on Gap Closing in Ohio’s summative grading system and instead put student growth closer to the center of accountability.
Ohio’s Gap Closing component puts the Buckeye State into compliance with several key federal requirements. While compliance is important, policymakers should consider a less complicated and less punitive approach to subgroup accountability in Ohio.