For two decades, Ohio’s school report card has shined a light on student outcomes in the state’s 3,000-plus public schools and 600-plus districts. It offers key academic results on state assessments and other markers of educational success, allowing parents and community members to gauge the quality of local schools. The report card framework, however, has evolved in response to changes in federal education policy and—more importantly—based on feedback from Ohioans. In 2021, state lawmakers passed reforms that fine-tuned the state’s report card model. What are those changes and how were they implemented in fall 2022? This publication dives into Ohio’s revamped report card and examines results from the first year of implementation. Download the full report or read it below.
Executive summary
For two decades, Ohio’s district and school report cards have been the linchpin to a transparent and accountable public school system. Report cards provide key information about how students perform on the year’s state assessments and how they are growing academically over time. In more recent years, Ohio has added measures of high schoolers’ postsecondary readiness as well as elementary school students’ progress in reading. These data—along with user-friendly ratings based on them—assist parents who are making school decisions for their children, and they provide communities with annual checkups on the academic quality of their local schools. In some circumstances, state policymakers rely on report cards to identify low-performing schools that need intervention and additional supports.
Given these critical purposes, Ohio needs a report card that provides a clear and accurate picture of school performance across the state. We at Fordham have long been staunch supporters of the goals and aims of Ohio’s report card, but we’ve also shared concerns about how its prior version was functioning. In a 2017 paper, we outlined how the system had gotten bogged down with too many measures and ratings, didn’t adequately differentiate school performance, and placed excessive emphasis on “status” measures that tend to correlate with demographics.[1] Others, largely representing school administration groups, leveled harsher criticisms, and some offered proposals that would have undermined the overarching goals of the report card.
After several years of debate, Ohio lawmakers overhauled the school report card in 2021. The reform legislation—House Bill 82 of the 134th General Assembly—won near-unanimous approval in both chambers, and Governor DeWine signed it into law.[2] We, along with other education groups, supported the legislative action. But it’s no secret that strong reforms on paper can go awry during implementation. This paper takes a closer look at Ohio’s redesigned report card in its first year of implementation (the 2021–22 school year). What are the key changes, and how were they put into practice?
In brief, the main revisions include the following.
First, it shifts from A–F school ratings to a five-star system. Although letter grades remain the most widely understood grading system, school officials criticized them and pushed for their removal. Feeling pressured, lawmakers weighed several alternatives and ultimately decided to go with star ratings, a system that continues to offer a clear sense of school performance, perhaps without being as politically charged. In 2021–22, Ohio assigned star ratings to schools on five components—Achievement, Progress, Graduation, Gap Closing, and Early Literacy—with a sixth component rating (based on postsecondary readiness) tentatively slated for 2024–25.
Second, the reforms wisely preserve and refine schools’ Overall rating, Ohio’s longstanding “bottom line” evaluation that combines schools’ performance on the various components. This mark continues to offer parents and the public a user-friendly summary of school quality, while, in a policy improvement, placing heavier weight on the Achievement and Progress ratings—the two central components of the state report card. Overall ratings were withheld in 2021–22 but will appear next year.
Third, the legislation makes dozens of technical adjustments to streamline the system and strengthen its various components. The notable revisions include (1) removing the duplicative “indicators-met” dimension of the Achievement component, thus yielding a clearer focus on the key performance-index (PI) measure; (2) adding a value-added “effect-size” growth measure in the Progress component that allows us to better pinpoint highly effective or ineffective schools; and (3) overhauling the Gap Closing calculations to ensure that schools are held accountable for both the achievement and academic growth of designated student groups (e.g., economically disadvantaged). Changes such as these are discussed further in the report.
This analysis also uncovers one issue that still needs more work: the insufficient rigor of the Gap Closing component. Last year, more than 60 percent of districts and schools received four- or five-star ratings on this measure, despite significant learning losses and widening achievement gaps coming out of the pandemic.[3] Such rosy results can be explained by a couple of decisions made by the state board of education and the department of education during implementation. First, while setting the grading scale was not easy given the significant changes to the component, the scale—in hindsight—ended up being too soft. Schools could meet less than half—just 45 percent—of the performance indicators and still receive four stars. Additionally, the achievement targets for subgroups were set too low. For instance, 75 percent of schools met the state’s PI goals for economically disadvantaged pupils, even as those very students suffered large learning losses. Moving forward, policymakers should increase the rigor of this component and make certain that it offers an honest picture of how effectively schools are educating all student groups.
To its credit, Ohio is moving to a report card that offers transparent ratings to parents and the public, treats schools more evenhandedly, and has a stronger technical foundation. It’s one that state leaders should be proud of and confidently stand behind. With some smart tweaks as the new framework is implemented, Ohio will finally have a report card that is built to last.
Analysis of Ohio’s revamped school report card
After several years of debate, Ohio lawmakers overhauled the state’s school report card in 2021 via House Bill 82 of the 134th General Assembly. The legislation preserves several core strengths of the previous system, including (1) upholding Ohio’s longstanding commitment to using objective measures of performance on state exams; (2) maintaining the PI and value-added measures as the key indicators of pupil achievement and growth, respectively; and (3) preserving the state’s Overall rating, which combines results across multiple report-card components to generate a user-friendly summary for parents and the public.
Yet it’s also different. The most visible shift is replacing A–F letter grades for districts and individual schools with a five-star rating system. Appearing on the 2012–13 to 2018–19 report cards, letter grades moved Ohio to a more widely understood rating system than the state’s prior approach, which had relied on ambiguous labels such as “continuous improvement.” School officials, however, bitterly criticized letter grades and pressed to scrap ratings altogether or reinstate vague labels. Others (including Fordham) raised concerns that such opaque approaches would hide the ball. In the end, state lawmakers reached a reasonable compromise. The new five-star system continues to offer a transparent picture of school quality, one that parents and the public can easily grasp, while taking some of the sting out of the ratings.
The reform legislation also made numerous technical changes that streamline and improve the system. These refinements were undertaken in response to concerns from Fordham and other education groups that the report card wasn’t functioning properly. Fortunately, legislators were willing to roll up their sleeves and make the fixes needed to strengthen the report card. Table 1 reviews the most important changes, with more detailed analysis of five key components in the pages that follow. Discussion about the Overall rating, including tweaks made to the methodology for calculating it, appears later in this report.
Table 1: Summary of Ohio’s report-card components and recent changes to them
Component 1: Achievement
Student achievement on state assessments has long formed the backbone of Ohio’s report-card system—rightly so, as measures of achievement offer Ohioans a clear sense of where students in their local districts and schools currently stand academically. They shed light on the basic question of whether students are struggling in core academic subjects or exceeding state standards.
For many years, Ohio has deployed two measures—proficiency rates and PI scores—to present a picture of student achievement. As noted in the table above, the recent reforms eliminated the use of proficiency rates via “indicators met” in the rating system (though these rates are still reported). Instead, Ohio now relies entirely on the PI to determine Achievement ratings. The two measures are highly correlated—schools with high PI scores tend to have high proficiency rates (and vice versa)—so removing one of them streamlined the rating system. In contrast to the more simplistic proficiency rate, the PI uses weights that provide more credit to schools when students achieve at higher levels. In an accountability setting, this structure encourages schools to pay attention to all students—including high and low achievers—rather than incentivizing a narrow focus on students around the proficiency bar. The table shows the calculations using 2021–22 data from Columbus City Schools, the largest district in Ohio.
Though the index’s weights have remained consistent over time, the recent legislation slightly alters the way scores translate into ratings. Previously, schools’ PI ratings were based on their score divided by the maximum number of points possible (120, excluding “Advanced Plus”). Now they are determined by dividing scores by the average of the top 2 percent of districts or schools statewide (107.3 for districts and 109.1 for schools in 2021–22).[5] This “curve,” which was championed by school administrator groups, slightly boosts Achievement ratings and explains why—despite the decline in scores statewide[6]—more schools received high marks on the Achievement component in 2021–22 compared to 2018–19.
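To make the mechanics concrete, here is a rough Python sketch of the PI calculation and the new rating “curve.” The achievement-level weights shown are approximations for illustration only and should be checked against ODE’s technical documentation; the top 2 percent divisor (109.1 for schools in 2021–22) comes from the text, and the school in the example is hypothetical.

```python
# Sketch of Ohio's performance-index (PI) calculation. The weights below
# are illustrative approximations of ODE's published achievement-level
# weights; verify against the state's technical documentation.
WEIGHTS = {
    "advanced_plus": 1.3,
    "advanced": 1.2,
    "accelerated": 1.1,
    "proficient": 1.0,
    "basic": 0.6,
    "limited": 0.3,
    "untested": 0.0,
}

def performance_index(counts: dict) -> float:
    """Weighted average across all students, scaled by 100."""
    total = sum(counts.values())
    points = sum(WEIGHTS[level] * n for level, n in counts.items())
    return 100 * points / total

def pi_percentage(pi_score: float, top2_avg: float = 109.1) -> float:
    """New 'curve': divide by the average PI of the top 2 percent of
    schools statewide (109.1 in 2021-22), rather than the old fixed
    maximum of 120."""
    return 100 * pi_score / top2_avg

# Hypothetical school of 100 students
counts = {"advanced_plus": 1, "advanced": 15, "accelerated": 20,
          "proficient": 30, "basic": 20, "limited": 14, "untested": 0}
pi = performance_index(counts)
print(round(pi, 1), round(pi_percentage(pi), 1))  # -> 87.5 80.2
```

Note how the same raw PI score yields a higher percentage under the curve (87.5 out of 109.1 is about 80 percent) than it would against the old 120-point maximum (about 73 percent), which helps explain the ratings bump.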
Figure 1: Distribution of Achievement ratings in Ohio schools, 2018–19 and 2021–22
While achievement measures provide insight into where students stand, they disadvantage high-poverty schools. That reality reflects persistent achievement gaps that are partly driven by socioeconomic factors. The PI scores and Achievement ratings for 2021–22 continue to follow this pattern. Figure 2 shows that schools with higher percentages of economically disadvantaged students tend to have lower PI scores. Table 3 reports that most high-poverty schools receive one- or two-star Achievement ratings (86 percent), while just 5 percent of low-poverty schools receive such marks.
Figure 2: PI scores versus economically disadvantaged students in Ohio schools, 2021–22
Table 3: Achievement ratings by poverty tier in Ohio schools, 2021–22
Component 2: Progress
Recognizing the limitations of evaluating school performance solely based on achievement metrics, analysts have developed student growth measures as a way to create a more even playing field for schools serving children of differing backgrounds. These measures rely on statistical techniques that gauge a school’s contribution to changes in student achievement over time. Because these methodologies control for a student’s prior achievement, schools of all poverty levels have more equal opportunities to demonstrate academic growth. To offer a simplified illustration of this type of method, consider a high-poverty school whose average student starts the year at the twentieth percentile. At the end of the year, this student scores at the twenty-fifth percentile. That five-percentile gain is recognized under a growth model, even though this student still hasn’t reached proficiency.
For more than a decade, Ohio has used a “value-added” growth model.[7] Under Ohio’s former report-card system, the state relied on the value-added index score to determine Progress ratings. While that score indicates whether students’ gains or losses are statistically significant, it doesn’t offer a clear sense of their magnitude. This was less than ideal. A large district, for instance, could eke out a miniscule gain of one percentile yet receive a top rating because of the strong statistical evidence (due to its large numbers of students) that the gain was different from zero. To gauge the size of the impact more appropriately, Ohio began implementation of a value-added effect size in 2021–22, which is now paired with the traditional index score to determine Progress ratings. Taken together, the two measures now offer a better depiction of whether a school’s impact on pupil growth is both statistically and practically significant.
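The intuition behind the effect size can be shown with a simplified sketch. This is not ODE’s actual value-added model (Ohio’s is far more elaborate); it merely illustrates why scaling the average gain by the spread of scores keeps a tiny-but-statistically-detectable gain from looking impressive. The standard deviation of 20 is an arbitrary illustrative value.

```python
import statistics

def effect_size(gains, score_sd):
    """Average student gain expressed in standard-deviation units.
    A simplified stand-in for a value-added effect size; Ohio's actual
    model is considerably more sophisticated."""
    return statistics.mean(gains) / score_sd

# A large district with a tiny average gain: with 10,000 students the
# gain may be statistically distinguishable from zero, but once scaled
# by the score spread it is clearly negligible in practical terms.
tiny_gains = [0.5] * 10_000
print(effect_size(tiny_gains, score_sd=20))  # -> 0.025
```

An effect size of 0.025 would fall well short of the +0.1 threshold Ohio uses to award a top Progress rating, which is precisely the point of pairing it with the index score.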
The following table displays how Ohio combines the two measures to determine Progress ratings for individual schools. It’s a two-step process in which the state first considers the index score—using a largely similar framework as before[8]—and then applies the effect size to differentiate four- versus five-star schools and one- versus two-star schools.
Table 4: Progress rating framework for Ohio schools
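The two-step logic can be sketched in code. The index-score cut points below come from the former A–F scale described in the endnotes, and the ±0.1 effect-size thresholds are those reported for districts; the exact way ODE combines them is more detailed than this, so treat the function as an illustration rather than the official algorithm.

```python
def progress_rating(index_score: float, effect_size: float) -> int:
    """Two-step Progress rating sketch. Step 1 uses the value-added
    index score (cut points from the former A-F scale); step 2 uses
    the effect size (+/-0.1, the district thresholds) to separate
    4- vs. 5-star and 1- vs. 2-star performers."""
    # Step 1: the index score sets the candidate tier.
    if index_score >= 2.0:
        tier = 5
    elif index_score >= 1.0:
        tier = 4
    elif index_score > -1.0:
        tier = 3
    elif index_score > -2.0:
        tier = 2
    else:
        tier = 1
    # Step 2: the effect size gauges practical significance.
    if tier == 5 and effect_size < 0.1:
        tier = 4   # statistically significant gain, but small in size
    if tier == 1 and effect_size > -0.1:
        tier = 2   # statistically significant loss, but small in size
    return tier

# A large district with strong statistical evidence (index 3.1) but a
# negligible gain (effect size 0.02) no longer earns the top rating.
print(progress_rating(3.1, 0.02))  # -> 4
print(progress_rating(3.1, 0.25))  # -> 5
```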
The Progress ratings confirm that the new framework better pinpoints high and low performers. Under the former system, Ohio identified almost four in five schools as either A’s or F’s. With the application of the effect size, fewer schools are now at the “tails” of the distribution: 13 percent were assigned one-star Progress ratings, and 16 percent received five stars. It’s important to note that in 2021–22, schools could post strong Progress ratings even though their students still lagged their prepandemic peers in achievement. If a school did comparatively well relative to other schools in moving its students ahead from 2020–21 to 2021–22, it would have received a solid rating.
Figure 3: Distribution of Progress component ratings in Ohio schools, 2018–19 and 2021–22
Historically, Ohio’s value-added system has produced poverty-neutral results that help to identify high- and low-performing schools serving students of varying demographics. That pattern surfaced again in the 2021–22 data. Figure 4 reveals almost no correlation—indicated by the flat regression line—between a school’s value-added index scores and its percentage of economically disadvantaged students. The same holds true for the effect size results.
Figure 4: Value-added index scores (top) and effect sizes (bottom) versus economically disadvantaged in Ohio schools, 2021–22[9]
As for the component ratings, schools of all poverty levels fare about equally well. Table 5 shows that 18 percent of high-poverty schools received five-star Progress ratings, a slightly larger proportion than low- and mid-poverty schools. Conversely, 15 percent of high-poverty schools received one-star ratings, while 9 percent of low-poverty schools did.
Table 5: Distribution of Progress ratings by poverty tier in Ohio schools, 2021–22
Component 3: Graduation
Very little changed within the Graduation component in the recent overhaul, as it continues to rely on both the four- and five-year graduation rates to determine the rating. The grading scale, however, was slightly tweaked—likely an effort to comply with a new provision that requires the state board of education to set grading scales in a way that avoids identifying more than 50 percent of schools in any single rating category.[10] As figure 5 shows, almost half of Ohio high schools received A’s on the Graduation component in 2018–19, so the board slightly raised the bar. This slight adjustment reduced the percentage of high schools receiving five-star ratings to 30 percent in 2021–22.[11]
Table 6: Grading scale for the Graduation component, 2018–19 and 2021–22[12]
Figure 5: Distribution of Graduation component ratings, Ohio high schools, 2018–19 and 2021–22
Component 4: Gap Closing
The Gap Closing component focuses on the outcomes of specific student subgroups that are identified in federal and state law (i.e., economically disadvantaged, English learners, special education, and race/ethnicity). This “disaggregated” look at results helps to ensure that historically disadvantaged students are not being overlooked and their results are not masked by overall school averages. Unfortunately, the structure of the old Gap Closing component was unnecessarily complex[13] and failed to properly account for both the achievement and growth of the various subgroups.[14] The term Gap Closing is also something of a misnomer, as the component includes traditionally high-achieving subgroups and doesn’t directly gauge whether gaps are closing or not. There were efforts to change the component name in the reform legislation, but they did not pass.
Though conceptually similar—Gap Closing continues to focus on subgroup outcomes—state lawmakers overhauled the component in the recent legislation. Instead of a complicated scoring system, the new framework deploys a more straightforward methodology that looks at all the subgroups and gives schools credit when they meet an achievement or growth target.[15] Table 7 shows the system in action, using data from Columbus City Schools. To illustrate, the table shows only the English language arts (ELA) achievement and growth results—the same format is used to evaluate subgroup performance in math as well as four-year graduation rates. It also displays five “special” elements within the Gap Closing component, including three that focus on gifted students.
Table 7: Illustration of the new Gap Closing calculations [16][17][18][19]
Given the learning losses and widening achievement gaps, the most surprising result from the 2021–22 report card is the number of schools that received high Gap Closing ratings. As Figure 6 indicates, 61 percent of Ohio schools received four- or five-star ratings on this component, more than in 2018–19, a year when achievement was higher.
Figure 6: Distribution of Gap Closing component ratings, 2018–19 and 2021–22
Two factors—both issues that emerged during implementation—help to explain such results:
First, a soft grading scale buoyed these ratings. While the restructuring of the component made it difficult to project scores and set a grading scale, the state board ended up setting the bar too low. Schools could miss 40 percent of the performance indicators and still earn five stars, and they could miss a majority of them (55 percent) and receive four.[20] Under administrative law, the board must review the component grading scales within the next two years. As part of that process, it should implement a more rigorous Gap Closing scale.[21]
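The softness of the scale is easy to see when the cut points from the endnotes (five stars at 60–100 percent of indicators met, four at 45–60, three at 30–45, two at 10–30, one at 0–10) are written out. Boundary handling in this sketch is an assumption; check ODE’s documentation for the official treatment of exact cut-point values.

```python
def gap_closing_stars(indicators_met: int, indicators_total: int) -> int:
    """Map the share of Gap Closing indicators met to a star rating,
    using the 2021-22 scale reported in the endnotes. Treatment of
    exact boundary values is assumed for illustration."""
    pct = 100 * indicators_met / indicators_total
    if pct >= 60:
        return 5
    if pct >= 45:
        return 4
    if pct >= 30:
        return 3
    if pct >= 10:
        return 2
    return 1

# Meeting just 45 of 100 indicators, i.e., missing a majority of them,
# still yields four stars under this scale.
print(gap_closing_stars(45, 100))  # -> 4
print(gap_closing_stars(60, 100))  # -> 5
```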
Second, the subgroup PI targets set by the Ohio Department of Education (ODE) are also too low. As table 8 shows, more than 70 percent of Ohio schools met the achievement goal for economically disadvantaged students and more than 60 percent met goals for Black and Hispanic students, despite the learning losses and widening achievement gaps for these groups. While these goals will rise over time, they still don’t set rigorous expectations. None of the subgroup goals for 2024–25 match their pre-pandemic achievement levels, sending the message that the state is willing to tolerate learning losses into the second half of this decade (see the Appendix for the state’s amended performance goals).
Table 8: Percentage of schools meeting math and ELA PI goals by subgroup[22]
Overall, the implementation of the new Gap Closing measure was good but imperfect. Structurally, the component is clearer and simpler, making it easier for the public and policymakers to see which student groups are achieving performance goals and which ones need more help. The balanced emphasis on subgroup achievement and growth is also commendable. State policymakers, however, must increase the rigor of the component by recalibrating the grading scale and raising the PI targets. Doing so will better identify the schools in which all student groups are faring well academically, as well as those that are falling short and require more assistance.
Component 5: Early literacy
Much like Gap Closing, legislators also overhauled the Early Literacy component in the recent reforms. Seeking to create a more comprehensive picture of early literacy, they added two measures to this rating: third-grade reading proficiency rates and grade-promotion rates. These two measures are now incorporated alongside the existing “off-track” progress measure. Schools’ results on these measures are then combined into a weighted average, with greater emphasis given to third-grade reading proficiency, promotion rates, and off-track progress in that order. When schools are exempt from the off-track progress dimension,[23] the weights are 60 percent proficiency and 40 percent promotion rates.
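A short sketch shows how the weighted average behaves. The 60/40 split for exempt schools comes from the text; the three-way weights in the default case are illustrative placeholders only (the text says just that proficiency outweighs promotion, which outweighs off-track progress), so check ODE’s documentation for the actual values.

```python
def early_literacy_score(proficiency, promotion, off_track_progress=None,
                         weights=(0.5, 0.3, 0.2)):
    """Weighted average of the Early Literacy measures (all in percent).
    The three-way weights are illustrative placeholders that merely
    respect the ordering described in the text; the 60/40 split for
    exempt schools is from the report."""
    if off_track_progress is None:
        # Schools exempt from the off-track measure.
        return 0.6 * proficiency + 0.4 * promotion
    w_prof, w_promo, w_track = weights
    return (w_prof * proficiency + w_promo * promotion
            + w_track * off_track_progress)

# Near-universal promotion (99 percent) offsets weak off-track progress
# (20 percent), so the weighted average lands just above the
# proficiency rate, mirroring the 2021-22 pattern described below.
print(round(early_literacy_score(62.0, 99.0, 20.0), 1))  # -> 64.7
```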
Table 9: An overview of the Early Literacy component[24]
In 2021–22, almost universal grade promotion occurred, as the state waived conventional third-grade reading standards for the year.[25] Just over three in five Ohio third graders passed their reading exams last year, while the off-track reader progress rates were rather low. The sky-high promotion rates offset the poor rates of progress, and the median district and school’s weighted average settled just above the reading-proficiency rate.
Figure 7: Median district and school-level early-literacy rates, 2021–22
Even with the “easy” promotion rate dimension, a slight majority of Ohio schools received one- or two-star ratings last year. High-poverty schools tended to struggle most on Early Literacy, with 63 percent of them receiving one star. As noted above, the disappointing rates of off-track readers’ progress in many Ohio schools account for these ratings—and the grading scale isn’t to blame. In fact, to achieve a three-star rating, a school would need to move just 35 percent of its off-track readers to on-track status, provided it registers 95 percent promotion and 65 percent proficiency rates.[26] That performance expectation isn’t unreasonable, especially given the critical importance of ensuring that all children read fluently by the end of third grade.
Figure 8: Early Literacy ratings, Ohio schools, 2018–19 and 2021–22
Table 10: Distribution of Early Literacy ratings by poverty tier, Ohio schools, 2021–22
Coming soon: Overall ratings and an overhauled postsecondary readiness component
The component ratings—and data that underlie them—all help the public, educators, and policymakers dig into a district or school’s strengths and weaknesses. But for many Ohioans, especially those who aren’t as familiar with education metrics, the Overall rating offers an invaluable “bottom line” of school quality. Although the 2021–22 report card did not feature an Overall rating, it will return next year, having last appeared in 2018–19.
Lawmakers did tweak the methodology for combining the results into a single mark. The new weighting system rightly places more emphasis on the core Achievement and Progress components. By law, they are now given twice the weight of the other components, whereas they had previously received about 1.2 to 1.3 times the weight of the others.[27] Based on their component results, schools will receive a total number of “overall points,” which will then be used to determine the summative rating.[28] Unlike the component ratings, which are given as whole stars, the Overall mark will include half-star ratings.
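The new weighting can be sketched as a simple weighted average of component stars. The 2:1 weights for Achievement and Progress come from the text; the rounding to half stars is an assumption for illustration (the statute’s “overall points” conversion may differ), and the example school is hypothetical.

```python
def overall_rating(ratings: dict) -> float:
    """Weighted average of component star ratings. Achievement and
    Progress carry twice the weight of the other components, per the
    reform legislation. Rounding to the nearest half star is an
    assumption for illustration; the statutory 'overall points'
    conversion may differ."""
    weights = {"Achievement": 2, "Progress": 2, "Graduation": 1,
               "Gap Closing": 1, "Early Literacy": 1}
    total_weight = sum(weights[c] for c in ratings)
    score = sum(weights[c] * stars for c, stars in ratings.items())
    return round(2 * score / total_weight) / 2  # nearest half star

# Hypothetical district: strong Progress pulls the summary mark up
# because of its doubled weight.
ratings = {"Achievement": 3, "Progress": 5, "Graduation": 4,
           "Gap Closing": 4, "Early Literacy": 2}
print(overall_rating(ratings))  # -> 3.5
```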
Table 11: The Overall rating system for school districts and selected school grade spans
Further out, Ohioans can look forward to an overhauled College, Career, Workforce, and Military Readiness (CCWMR) component. This is a revamped version of the component formerly known as Prepared for Success that will include new indicators of college and career readiness, such as how many students enlist in the military after high school and how many students complete an apprenticeship during high school. Although Ohio released relevant data in 2021–22—e.g., remediation-free ACT or SAT scores, industry credentials, and AP or IB pass rates—it did not use them to generate a component rating. ODE is tasked with reviewing these data and proposing a grading scale for the component; a legislative committee is responsible for approving it as a rated component starting in 2024–25.
Conclusion and recommendations
With legislative reforms now on the books, the 2021–22 school year marked a new day for Ohio’s school report card. This iteration promises fairer and more accurate school ratings and—with any luck—less political contention about the system. Through the five-star rating system, the updated framework continues to offer Ohio parents and the public a clear picture of schools’ academic quality. For policymakers and community leaders, the report card offers a trove of data and multiple ratings that allow them to track trends and gauge the strengths and weaknesses of local schools.
Overall, the new report card is a big step in the right direction for Ohio. Hence, our first recommendation to state lawmakers is this: Please stay the course. On too many occasions, Ohio has adopted promising education policies only to backpedal in the early years of implementation. We hope this doesn’t happen here. To their credit, Ohio lawmakers, state board members, and Department of Education leadership have worked together to design and implement a high-quality report card, perhaps one of the finest in the nation. It would be a grave disservice to Ohioans if these efforts were weakened or undone by future policymakers.
Second, Ohio should follow through and complete the full implementation of the College, Career, Workforce, and Military Readiness component by making it a rated element starting in 2024–25. With significant changes to this dimension, the legislative reforms smartly call for a transitional period in which postsecondary readiness data are reported but not used to produce ratings. In order for a rating to appear in fall 2025, statute requires the Joint Committee on Agency Rule Review to approve ODE’s system for implementing this rating. When up for review, we urge approval. The component rating will offer the public a clear sense of how districts and high schools fulfill one of their core academic missions—to ready young people for their next step in life, whether college or career. Approval will also ensure that postsecondary readiness contributes to the Overall rating of a district or high school.
Third, the state board of education and ODE need to increase the rigor of the Gap Closing component. This element plays an integral role in the report card, as it ensures that schools pay attention to academic needs of all student groups. Although the reforms establish a stronger Gap Closing framework, its implementation was imperfectly executed. To strengthen the component, the state board of education should increase the performance standards (i.e., “cut scores”) that schools must achieve to earn top Gap Closing ratings. In addition, ODE should also reevaluate its low subgroup PI targets. With these tweaks, a more honest picture of how effectively schools are serving all students should begin to emerge.
For two decades, the report card has shed light on how Ohio’s 1.6 million students fare academically and how effectively the state’s roughly 600 districts and 3,300 public schools move the achievement needle. That sunlight is still needed today, as achievement gaps persist (and have even widened in the post-pandemic era) and thousands of students still struggle to exit high school with the academic skills necessary for college and career. Fortunately, Ohio’s policymakers have recognized the critical role of the report card and worked hard to strengthen it. Their efforts have created a better-functioning report card that now offers parents and the public a clearer, more accurate look at academic performance across Ohio. That’s something we can all cheer about.
Acknowledgments
I wish to thank my Fordham Institute colleagues Michael J. Petrilli, Chester E. Finn, Jr., Chad L. Aldis, and Jessica Poiner for their thoughtful feedback during the drafting process. Jeff Murray provided expert assistance in report production and dissemination. Special thanks to Pamela Tatz, who copyedited the manuscript, and Andy Kittles, who created the design. All errors, however, are my own.
[4] To be in the Advanced Plus category, students must be on a formal acceleration plan, take an above-grade-level assessment, and achieve a score of advanced. Just 0.9 percent of Ohio students achieved this level in 2021–22.
[5] The district top 2 percent average is used for district ratings; the school-level average is used for individual school ratings.
[6] In 2018–19, the statewide average index score was 84.7; in 2021–22, it was 79.3. The “grading scale” for the PI did not change during this period and doesn’t explain the bump in ratings.
[8] Under the former system, the cut points for value-added ratings were index scores of A = +2.0 or above; B = +1.0 to +2.0; C = -1.0 to +1.0; D = -1.0 to -2.0; and F = -2.0 or below.
[9] Districts must achieve an effect size of +0.1 or above to receive five stars and -0.1 or below to receive one star.
[11] Lower graduation rates are also a possible explanation, though the statewide four-year graduation rate was higher in 2021–22 than in 2018–19 (87 versus 85 percent), as were five-year rates (89 versus 86 percent).
[12] The previous report-card system relied on a “points” system to combine the four- and five-year graduation rates into a composite score to determine the component rating, so I calculate the weighted four- and five-year average graduation rates that are equivalent to the new system. The “cut points” for the old system are available at ODE, Appendix B: ESSA Sections A.1-A.4.
[13] The old system employed a “points” type system that awarded full credit if a particular subgroup achieved PI and graduation rate targets and then, through a complex calculation, provided “partial credit” if a subgroup missed the target but demonstrated year-to-year improvement on these measures. A full description of the former Gap Closing component is available at Ohio Department of Education, 2019–2020 AMO Gap Closing Measure (Columbus, OH: Ohio Department of Education, 2020).
[14] In the old system, a school that failed to achieve a PI goal could receive full credit if that subgroup had a value-added index score of +1.0 or above. The problem with this type of “either-or” system is that it allows poor student achievement or growth to be ignored.
[15] ODE significantly adjusted downward the PI goals in response to pandemic learning loss, and those goals are found in its amended ESSA plan. The goals, however, will gradually increase each year. The value-added goals are tied to the Progress rating system and do not change from year to year. Detailed information about the current Gap Closing component is available at Ohio Department of Education, 2021–2022 School Year: Gap Closing Component—Technical Documentation (Columbus, OH: Ohio Department of Education, 2022).
[16] No partial points are awarded, including for the special elements that are worth five points.
[17] The value-added effect size is not applied in the subgroup growth system.
[18] The chronic absenteeism element was not included in the 2021–22 Gap Closing calculations but will be added starting in 2022–23. It will be worth five points. Schools can meet the chronic-absenteeism indicator by either achieving a rate below the state’s target for the year or posting lower rates compared to the year prior.
[19] The English learner: Alt. assessment element is a federally required measure that looks at the percentage of English learners who make progress on an alternative literacy assessment.
[20] The Gap Closing grading scale was overhauled in the new report card. The scale is as follows: 5 stars = 60–100 percent; 4 stars = 45–60 percent; 3 stars = 30–45 percent; 2 stars = 10–30 percent; and 1 star = 0–10 percent.
[22] Only one school had enough Native American or Native Alaskan students to have a PI score reported for this subgroup.
[23] Under state law, if fewer than 10 percent of a school’s kindergarten students are deemed off-track, this element does not apply.
[24] Ohio law exempts certain English learners and special-education students from the Third Grade Reading Guarantee’s grade promotion standards. In 2018–19, the most recent year the promotional requirements were in effect, 6.2 percent of third graders were exempt. For more about the Guarantee and its promotional standards, see Ohio Department of Education, Guidance Manual on the Third Grade Reading Guarantee: School Year 2019–2020 (Columbus, OH: Ohio Department of Education, 2020).
[25] Third-grade reading promotional standards go back into effect in 2022–23.
[26] The weighted average in this example is 68 percent. The Early Literacy grading scale is as follows: 5 stars = 88–100 percent; 4 stars = 78–88 percent; 3 stars = 68–78 percent; 2 stars = 58–68 percent; and 1 star = 0–58 percent.
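As an illustration, the Early Literacy scale described above can be sketched as a simple lookup. This is a hypothetical sketch, not the state’s actual calculation code, and the boundary handling is an assumption, since adjacent bands in the published scale share endpoints (e.g., both “78–88” and “88–100” include 88).

```python
# Illustrative sketch of the Early Literacy star-rating scale in note [26].
# Assumption: the lower bound of each band is inclusive (e.g., exactly 88
# earns five stars); the published scale's shared endpoints leave this open.

def early_literacy_stars(weighted_pct):
    """Map a weighted Early Literacy percentage (0-100) to a 1-5 star rating."""
    if weighted_pct >= 88:
        return 5
    if weighted_pct >= 78:
        return 4
    if weighted_pct >= 68:
        return 3
    if weighted_pct >= 58:
        return 2
    return 1

# The 68 percent weighted average from the example above lands at the
# 58-68 / 68-78 boundary; under this lower-bound-inclusive assumption
# it receives three stars.
print(early_literacy_stars(68))
```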
[27] For the weights used in the old Overall rating system, see Aaron Churchill, “A new day for Ohio’s school report cards,” Ohio Gadfly Daily, Thomas B. Fordham Institute, July 1, 2021.
First things first: Looks like a deal was reached between Akron City Schools and its teachers union at some point over the weekend. A strike has been averted and school is on today. Yay. (Cleveland.com, 1/9/23)
Speaking of newspaper reporters: It seems that the Dispatch’s Megan Henry got tons of information to help guide her understanding of what IEPs are and how they work in schools (or don’t work, as the case may be) for this piece. Feels like there are still some outstanding questions even after this coverage, but that might just be me. (Columbus Dispatch, 1/8/23)
Moving firmly into the new year, I know you are all on pins and needles to find out how Columbus City Schools’ new busing plans have worked out so far. Here is a look at what is termed Day 1 of the new routing plan on Wednesday. It does not sound super great to me. But we all know that Wednesday was actually Day 2 and that the real Day 1 was Tuesday, when many charter and private schools in the area started back after the holiday break. While the morning routes for charter students sound like a bit of an improvement over 2022, by the evening things already appeared to be tanking. (Columbus Dispatch, 1/4/23) Day 3—the real Day 3, yesterday—does not sound all that much better for either district or charter students based on this coverage. Interestingly, lack of drivers is not an issue anymore—district officials stress that they have way more drivers than routes on their roster as well as hundreds of additional routes covered by a third-party contractor. So, causes of the documented problems must lie elsewhere. District officials quoted here seem to have some firm ideas where. (Columbus Dispatch, 1/5/23)
School closures are awful. I won’t argue otherwise.
But they are almost certainly on the horizon. Due to enrollment shifts and falling birth rates, many districts nationwide are experiencing a surge in empty seats. For a few years, federal funding tied to pandemic recovery may allow districts to delay difficult consolidation decisions. However, there will come a time when the expense of staffing, maintaining, and operating an outsized number of schools becomes untenable—and closures will be the only option.
The numbers tell the same story in city after city: Just 60 percent of the available placements in Indianapolis are occupied. After shrinking by several hundred thousand students since 2000, Los Angeles expects to lose another 28 percent of its enrollment over the next eight years. Shifts in Boston have left the district with the equivalent of 16.5 unused school buildings. Chicago, which famously closed fifty schools in 2013 under Mayor Rahm Emanuel, subsequently self-imposed a five-year moratorium on closures. Then, in 2021, a new state law prohibited closures and consolidations until 2025. Meanwhile, enrollment has plummeted. In fact, Chicago has 80,000 fewer students than it did in 2013. This school year, district data show over forty schools with fewer than 200 students.
We are confronted by a national wave of enrollment decline in our urban systems.
Unpleasant though school closures may be, there are steps leaders can take to mitigate their negative effects on families. The most irresponsible approach is living in denial even when closures have become inevitable.
Denver Public Schools recently provided a high-profile example of what not to do. Its fiasco began in June 2021, when the Denver school board passed a resolution directing district leaders to address declining enrollment. A list of ten schools was eventually identified for closure—including some that, due to their small size, required additional subsidies of more than $500,000 each school year to maintain basic services for students. After community pushback, Superintendent Alex Marrero cut the list to five. By the time the school board was set to vote on the closures, Marrero had removed three more schools, leaving just two. The board ultimately did not vote to close any schools and sent Marrero’s team back to the drawing board. To sum up, the district spent eighteen months and made no decision at all.
It breaks my heart to see schools close. Most often, the effects are felt primarily by lower-income families. Neighborhoods lose beloved institutions and vital pieces of their social fabric. But there comes a point when it is no longer responsible to consume an outsized share of scarce public resources to provide subpar educational experiences in near-vacant schoolhouses.
Districts would be well advised to make closure decisions early and as judiciously as possible—and communicate how and why those decisions are made to families. Then comes the most important part: executing closures in a way that’s least disruptive for affected families. Done well, there is potential for students to land in better schools—and research has shown that when this happens, students often fare better academically than they did in their former placements.
I have some personal experience with a promising approach to school closures in New Orleans, where my organization, EdNavigator, has done work since 2015. Families in closing schools there receive two critical pieces of support. First, students are given preferential treatment in the city’s open-enrollment system. Essentially, if a vacancy exists, a student from a closing school has the first opportunity to claim it. This preference increases the likelihood that a student will land in a desired school rather than being defaulted to whichever school has excess capacity, as happens in many districts. (In some cases, this means that students in low-performing closing schools end up getting shunted to other low-performing schools that might be next on the closure list themselves.)
Second, our Navigators offer personalized counsel to families on which schools they should list on their applications. This is particularly important because families did not choose to leave their current placements. It is unlikely that they have been keeping tabs on alternative options.
Research published in 2022 suggests that these interventions make a difference. Families requested and were assigned to higher-performing schools than students in comparison groups, and they were more likely to remain in those placements over time. Initial results for student learning showed improvement in both reading and math during the first year students attended their new schools.
My advice to cities grappling with falling enrollment is to begin planning now. Engage in robust processes to take community input on which schools will close and when. But do not drag your feet hoping for a miracle that saves you from the scourge of closures altogether. That miracle is not coming. Instead, invest your time and resources in helping families transition, as New Orleans has done. Give families a real voice in determining their child’s new placement—and offer assistance in the pursuit of seats in charter schools, as well as traditional district schools. Moments like these are not the time to resurrect fruitless district-charter wars. Then, follow students closely as they acclimate to new buildings to ensure they aren’t lost in the shuffle, that their social work and special education services transfer seamlessly, that they make new friends.
The pandemic has already done enough harm to our children, in and out of school. We will only make matters worse by mismanaging our response to the enrollment shortfalls that show no signs of abating.
Over the past year, one of the most heavily debated topics in Ohio education has been the retention provision of the Third Grade Reading Guarantee, a decade-old package of early literacy reforms. Under the retention policy, schools must hold back students (with limited exceptions) who are struggling to read at the end of third grade and provide them intensive literacy supports. This requirement aims to ensure that all children have foundational reading skills before they are asked to tackle more challenging material in the middle and upper grades.
Despite the sound rationale, critics have long decried the policy as being hurtful to retained students. Their claims are often based in anecdote and crude interpretation of data. In November, the State Board of Education—a body that has been hostile to retention—presented data showing that fewer than one in six retained students achieve the state’s reading proficiency target in subsequent years. Based on these numbers, board members argued that the policy “has not achieved the desired result” and passed a resolution asking the legislature to scrap the requirement (which lawmakers have, so far, not done).
Yet such a brazen condemnation of Ohio’s Reading Guarantee is hardly warranted based on these data. Instead, as the debate continues, policymakers should heed more credible evidence about the effectiveness of retention, including a brand-new study that examines Indiana’s third-grade retention policy, to which we return a few paragraphs hence.
Let’s first review some problems with using raw proficiency numbers to make judgments about retention.
For starters, retained students could be making good progress in later grades, but focusing only on their “proficiency”—a relatively high bar that roughly 40 percent of Ohio students fall short of—would overlook those gains. Obviously, ensuring that every student is a proficient reader is an important goal for schools. But progress toward proficiency matters, too. Perhaps a retained student is moving from the 2nd to 15th percentile by fifth grade. That type of growth should also be part of any evaluation of the retention policy. Moreover, the raw numbers lack any context that could help us understand the actual impact of retention. How do retained students perform relative to other low-achieving students who narrowly pass the reading requirement? Do they make more or less progress than their close counterparts? Answers to such questions would provide a clearer picture of whether retention is better for low-achieving students than the alternative of “socially promoting” them.
Unfortunately, a careful evaluation of Ohio’s third-grade retention policy has not yet been undertaken. That should certainly change. But there has been strong empirical work from Florida that uncovers positive effects of retention under its early literacy law (those findings are discussed in an earlier piece). A recent report published by the Annenberg Institute at Brown University also reveals positive impacts of third grade retention in Indiana.
The analysis was conducted by Cory Koedel of the University of Missouri and NaYoung Hwang of the University of New Hampshire. Akin to Ohio’s and Florida’s reading policies, Indiana requires third graders to achieve a certain target on state reading exams in order to be promoted to fourth grade. The policy went into effect in 2011–12, and the analysts examine data through 2016–17. Using a “regression discontinuity” approach, Koedel and Hwang compare the fourth- through seventh-grade outcomes of retained students to their peers who just barely passed Indiana’s promotional threshold. This methodology (also used in the aforementioned Florida study) provides strong causal evidence—almost as good as a “gold standard” experiment—about the effect of holding back low-achieving third graders.
Here are Indiana’s impressive results:
In fourth grade, retained students achieve much higher state exam scores—in both math and reading—than students who just barely passed the promotional threshold in third grade. The academic boost for retained students persists through seventh grade, though the magnitude of the impact somewhat fades over time.
The results are consistently positive across student groups, with the average Black, Hispanic, and White student experiencing gains from retention. Likewise, both economically disadvantaged and non-disadvantaged retained students post higher subsequent scores than marginally promoted peers.
The study finds no significant impacts of retention on disciplinary or attendance outcomes—a finding that helps to alleviate concerns that retention demotivates students or leads to negative behavior at school.
The authors conclude, “Taken on the whole, our findings of positive achievement effects of the Indiana policy, coupled with the lack of negative effects on attendance and disciplinary outcomes, suggest grade retention is a promising intervention for students who are struggling academically early in their schooling careers.”
When lawmakers passed the Buckeye State’s Third Grade Reading Guarantee more than a decade ago, they did so because they recognized the importance of early literacy to students’ long-term success. Dropping the retention provision of the guarantee based on anecdotes and flimsy data would be reckless, potentially leaving thousands of Ohio students at risk of not receiving the extra time and support they need to read fluently. Holding back third graders struggling to read has worked in other states. It can work—and may very well be working—in Ohio, as well.
The Wall Street Journal’s regular Tuesday columnist, the wise, erudite (and British) Gerard Baker, kicked off 2023 by urging humility. Let us not resolve too much, let us abjure absolute certainty, and let us shun “the binary mind-set...
...that has seized our thinking about the kind of society and world we think we should live in. This is the mind-set that insists that every issue and question we confront poses an existential threat to our way of life if we don’t select the “right” choice. At this time of year especially, as we look back at the usual mix of things we got right and wrong, and forward to our hopes and plans, can we at least acknowledge the complexity of a world where our certainties proved so uncertain?
Humility is not perhaps the quality for which I’m best known. (You could double check with my family.) But I applaud Baker for reminding us that it’s wrong to be too certain that one is right and the other guy is dead wrong, that it’s possible for things to be partly true, for two seemingly opposed things to both be true and not entirely opposed, and that we’d be better people and live in a better society if we avoided either projecting absolute certainty about much of anything or demanding total acceptance of our most certain views by others.
Let me therefore launch the year with an octet of dualisms, eight pairs of statements that relate to education that I believe are both at least partly true, even though they seem to state opposing views (or realities).
1a: The pandemic was an unmitigated disaster for American education.
1b: Coming out of the pandemic, important elements of American education are savvier and more flexible than before.
The learning losses that our K–12 system now grapples with are grave, menacing—and as yet we’re not doing a great job of rectifying them. But we also watched thousands of schools adapt to the challenge, remain open while taking precautions, and continue to educate their pupils, often in innovative ways that blend technology with in-person instruction. We watched the swift invention of new forms of schooling, new choices, and the exercise of unprecedented engagement and interest by parents. We also saw political energies engaged that, while sometimes worrying, mostly feel like fresh breezes blowing through the stuffy corridors of districts and schools.
2a: The more school choice the better.
2b: A lot of schools of choice aren’t worth attending.
Kids need schools that work for them and alternatives to schools that don’t, and parents need the capacity to match their daughters and sons with the best possible schools. Despite impressive expansion of increasingly diverse school-choice opportunities in much of the country, millions of U.S. families still lack access to alternatives. So more school choice is needed. Yet too many schools—including “schools of choice,” whether district-operated, charter, or private—have displayed year after year of achievement and student-growth results so dismal that families should shun them. And too many of those schools mask their academic failures behind boasts and come-ons designed to attract kids and parents who aren’t good choosers.
3a: It’s time to focus more on what’s taught, less on the structures and governance of schooling.
3b: Until we reform K–12’s archaic structures and rigidities, effective teaching and learning cannot happen.
Quality curriculum aligned with ambitious standards is fast emerging into the ed-reform sun, and that’s a good thing. Structures and policies alone don’t teach anybody anything. But—as painfully displayed in many places by districts’ child-averse reactions to the pandemic and inept, sluggish, and half-hearted efforts to overcome learning loss—the principal structures through which K–12 education is delivered in the U.S. are inflexible, authoritarian, and adult-centric. Local school districts overseen by locally elected school boards rarely put kids first, cannot flex when circumstances change, and are wedded to practices—for example, Carnegie units, year-long age-based grades, and six-hour days—that effectively block both teachers and students from the successful delivery of essential learning.
4a: Standards, testing, and accountability have run their course, done all they can, and done some harm along the way. Let’s move on.
4b: It’s time to double down on quality academic standards and results-based school accountability.
American education got some needed boosts from two-plus decades of the trinitarian reform strategy of standards, assessments, and results-based school accountability. Along the way, we learned some lessons about collateral damage: teaching to the tests, for example, narrowing the curriculum, mislabeling good schools, neglecting advanced learners, and more. We also saw more clearly how many of the things we want schools to do for kids—character building, persistence, tolerance, self-discipline, and more—cannot be gauged via standardized test scores. So, yes, we need a more diverse array of targets, metrics, criteria, and remedies. But there’s a baby in that bathwater that must not be discarded: the centrality of children mastering the core curriculum in ever more sophisticated ways so as to be well prepared for career, college, and citizenship. For that to happen, we must persist with ambitious academic standards, reliable means of gauging progress toward them, and rewards and interventions for schools depending on how well they move their pupils—all their pupils—toward mastery.
5a: We’ve oversold “college for all” and need alternate pathways for kids.
5b: No educator doesn’t want their kids to graduate from four-year colleges.
Too many young Americans see nothing in their K–12 schooling that isn’t pointed toward college, and a lamentable fraction of them end up in colleges they don’t necessarily see the point of attending and for which they’re ill-prepared to succeed. (Cue drop-outs and debt-burden.) We’ve overpromised and underdelivered when it comes to college-going, and millions of kids would benefit from high-quality alternatives such as apprenticeships and sophisticated CTE. Yet the U.S. education system has been hesitant and sluggish for at least three big reasons: it’s set in its ways and loath to take on the costs and disruptions of retooling the K–12 sequence in fundamental ways; it’s wary of “tracking” of any sort (though also thoroughly hypocritical on this point); and pretty much everyone in it is a four-year-college graduate (or more) who expects their own kids to do likewise. I’ve never seen a truly satisfactory answer to Howard Fuller’s penetrating query: Whose are the kids you think should not go to college?
6a: Teachers should be paid better.
6b: Until we right-size, upgrade, and redeploy the teaching force, teacher pay will remain mediocre.
School teachers are the largest single workforce in America, close to four million strong, more than any other occupation. And their numbers have grown far faster than the number of kids in their schools. (Today’s national teacher-student ratio is about half what it was when I attended elementary school.) Moreover, they’re employed in an industrial-style system, deployed and compensated without regard to expertise or performance, and locked into step-and-ladder salary structures. Yes, great teachers—the vast majority of teachers—should be paid more. But for this to happen in more than a marginal way, we’ll need a total overhaul of the K–12 HR system. Of course, there’s enormous resistance to any such thing—and very little reformist zeal to make it happen.
7a: Right and left are locked in an epic culture war over civics and history.
7b: There’s a lot of latent consensus waiting to be recognized and implemented.
A series of recent polls and surveys has shown widespread agreement across most of the U.S. public—parents included—regarding the essential content of civics and history education. It crosses party lines and goes well beyond “basic facts.” What’s more, any number of organizations have been working hard to develop better standards and curricula for these essential subjects. Most Americans want schools to teach “the whole story,” both triumphs and failures, and to engage kids with complex issues, not just stuff them with information. Yet professional culture warriors seem to take delight in faulting everybody else’s products—as well as their motives and perhaps their mothers-in-law—and in a politically divided country, each “side” suspects the other of being far more extreme than it actually is. One of the more sobering things I’ve read in ages is this from a recent report by “More in Common”:
One of our most notable findings is that both Democrats and Republicans alike grossly overestimate whether members of the opposing party hold extreme views.... Many Republicans may believe most Democrats want to teach American history as a history of shame, guilt and a repudiation of our founding figures—but we found that is not the case. Many Democrats may believe most Republicans want to teach American history in a way that glosses over the injustices of slavery and racism—but we found that is also inaccurate.
8a: SEL is a Trojan Horse for political indoctrination.
8b: For eons, schools have worked to teach valuable social and emotional skills to their pupils—and that should continue.
Every responsible educator knows, and has known forever, that kids don’t learn much when they’re upset, anxious, bullied, or traumatized, any more than when they’re cold or hungry. Every responsible educator also knows that good schools do their utmost to instill in their pupils self-control, sound values, tolerance, sportsmanship, and other vital social and emotional skills. In those important ways, SEL is old news, but because it’s not done well in every school or classroom, we benefit from renewed attention to it. Yet it’s slippery stuff that can easily distract schools from their core academic obligations and can slide into a negation of solid school discipline and good manners. As we found in a recent Fordham study, terminology alone can be worrying to parents, most of whom prefer a focus on specifics (e.g., teaching tolerance) rather than neologisms and abstractions.
—
You may not agree in all eight instances that both statements are at least partly true, but I’ll wager that you think that about most of them. My thanks to Gerard Baker for a much-needed reminder that we’ll be better off if we acknowledge how seldom there is just one correct way of seeing things. It’s exhilarating to mount a soapbox and shout “I’m right and they’re wrong.” But so often that’s not true, it’s not fair, and it’s not helpful. Greetings, 2023.
As one article at National Affairs put it, the cries about a nationwide teacher shortage are “heavy on anecdote and speculation” but rather light on data. According to Derek Thompson at The Atlantic, there is no real teacher shortage. He argues that claims otherwise come from two misleading data points: unfilled positions newly created with pandemic funding, and an increase in teachers considering leaving but not following through. Some districts are struggling to fill all positions, yes, but these are localized problems, not a nationwide catastrophe.
That being said, there is a bona fide but often unaddressed teacher shortage: experienced teachers in charter schools. In the United States, a third of charter teachers have fewer than three years of teaching experience, compared to only a fifth of public school teachers. I teach in such a school, and approaching thirty makes me a veritable octogenarian.
Comparative inexperience and youth in front of classrooms carries costs. More than any other school-related factor, a teacher’s efficacy matters most to student learning. And especially in the early years, nothing improves a teacher's efficacy quite like experience. Inexperienced teachers are often ineffective teachers. What’s more, as chronicled at Success Academy in Robert Pondiscio’s superb How the Other Half Learns, this high turnover rate leaves charter schools exhausting resources and administrative time training new hires and managing the shortcomings of inexperienced educators.
Nor is this problem confined to individual charter schools. Teach For America, a major supplier of instructional oomph for charter schools, requires only a two-year commitment and is thus notorious for high turnover rates. One study that compared traditional teacher prep programs to alternate routes like Teach For America found that TFA would be more effective if it weren’t for “the negative effects of high exit rates.” Another recent study found that TFA teachers are demonstrably more effective five years in and that this “performance advantage is large enough to offset turnover costs.” Even so, turnover remains a significant obstacle.
Two researchers, Jennie Weiner and A. Chris Torres, ran an interesting survey on the professional identity of charter school teachers. Their sample size was small, but the interviews were extensive, so the study provides an interesting look into the psychology of these teachers and why they leave.
They found that many charter school teachers were drawn to the sector because teaching itself tends to lack cachet but charter schools are considered “elite positions.” The schools intentionally draw from driven, high-performing youth by appealing to their desire for intellectual challenge and prestige.
Despite these high ideals, burnout quickly took over. It’s comparatively easy to work sixty- to eighty-hour weeks as a single, childless adult. But many teachers in the survey wondered whether “charter work was sustainable” after “typical adult milestones” like marriage or children. Pondiscio notes the comparative youth of teachers at Success Academy. There, even a few years of experience makes one a veteran.
These difficulties, however, are only amplifications of general trends in the larger education sector. In the National Affairs piece noted earlier, Andrew Biggs and Jason Richwine point out that the most common reason for a teacher leaving the profession is “personal life factors.” Common media preoccupations like salary account for less than 5 percent.
Even as I write about the need to retain teachers, I have to check myself. Perhaps there are some positive tradeoffs to this high turnover. As Catherine Worth has written in Fordham’s own pages, not all teachers are created equal, and where traditional public schools struggle to fire the duds, it’s possible that this rapid turnover in charters shakes off the least effective teachers. Also, if charter networks are achieving success by capitalizing on the excess energy and time of aspiring youth, who am I to question that model? Why not replicate success rather than criticize it?
That being said, there may be a happy medium—policies and environments that draw aspiring candidates and keep them for the long haul. With school choice growing in popularity nationwide and some states moving to fully fund charter schools—institutions that typically operate on a lower per-pupil dollar amount—there may be resources to better balance these tradeoffs. For example, while pay doesn’t much affect teacher turnover generally, charter teachers receive far less compensation on average, and I’ve watched colleagues switch to traditional public schools for this very reason as they age and acquire more financial responsibility. With additional funds, charters could adopt more aggressive pay scales, competitive wages, or hybrid leadership positions wherein the most effective educators split their time between teaching and coaching new hires.
Similarly, if demands on time push many teachers out, increased funding could allow schools to hire additional staff so that teachers have more prep time during the day—and thus spend less time outside the building planning and grading—or teaching assistants who can help with mundane work like copies or rote grading.
Finally, many teachers prefer the private and charter sector because of the relative peace and order in the buildings. It is a primary draw, so it is essential that they maintain it. Weiner and Torres’s survey tells of traditional public school teachers suffering from “chaotic, crazy, and overwhelming” environments where seemingly all teachers could do was spend energy “putting out fires.” Teachers in these traditional public schools were alone—no support with student discipline, early career coaching, or anything of the sort. Charter schools are known for their rigid discipline structures and instructional coaching. If they lose that—say, through progressive pressure into lenient discipline policies—they lose one of the major draws to the sector.
As a young teacher myself, weighing the demands of a no-excuses-style school, I’ve always been happy with my pay. My frustration has come from managerial incompetence, lack of prep time, needless paperwork, bureaucratic hoops, and behavioral chaos. Teacher retention and support look like more than a 2 percent pay bump every year and some cards on the holidays.
By now, the unfinished learning that resulted from the Covid-19 pandemic is old news. Our colleagues Karyn Lewis and Megan Kuhfeld found that, in spring 2022, same-grade students were scoring about 3–4 percentile rank points lower in reading and 5–10 percentile rank points lower in math than their spring 2019 counterparts. NAEP 2022 showed similar findings. Unfortunately, students of color and those attending high-poverty schools were harmed even more by the pandemic. Black and Hispanic third graders each fell 6 percentile points in reading, and 10 and 9 percentile points in math, respectively. The overall achievement distribution in the United States has shifted downward, with students of color shifting even more than average.
As educators become more aware of the scope and nature of unfinished learning, what’s also becoming apparent are its implications for the many traditional policies and practices that were designed in a pre-pandemic world. A perfect example is the identification and placement criteria for advanced learning opportunities, such as exam-based high schools, gifted and talented programs, or even individual courses like seventh-grade algebra. It might be intuitive to think that a pandemic-driven drop in scores will mean fewer students meeting traditional readiness benchmarks—and that’s likely true—but there are also important equity implications.
Many program placement or intervention criteria rely on predetermined cut scores or normative percentiles. Simply put, they identify students for placement if they score in the top or bottom X percent of a given normative sample or if they meet a given cut score (e.g., a MAP RIT score of 200). In states like Arizona, Oklahoma, Nevada, West Virginia, and Florida, students must meet certain national percentiles to be identified for gifted and talented programs. However, most, if not all, of the normative samples to which students will be compared were collected before the pandemic. As a result, when schools give assessments during the 2022–23 school year, many of the test reports that schools and parents look at will include percentiles that contextualize scores based on pre-Covid performance.
Given the decrease in overall student achievement, schools that make advanced learning program placement decisions in 2022–23 (and beyond) are likely to see smaller and less-equitable student populations compared to prior years. To test this hypothesis and quantify the change, we dove into third grade MAP data from spring 2019 and spring 2022. We wanted to know how the demographic profile of students meeting the 90th percentile threshold (as established in the NWEA norms) in spring 2019 compared to that of those who met the same threshold in spring 2022. To do so, we relied on something called a representation index (RI): the proportion of students from a particular demographic group within the 90th percentile group divided by that group’s proportion of the overall student population. For example, if White students were 50 percent of those who scored at or above the 90th percentile, but 40 percent of the overall student population, this would yield an RI of 1.25 (0.50/0.40) and signify proportional overrepresentation within the 90th percentile. Our analyses relied on data from roughly 1.5 million students who completed a MAP Growth test in spring 2019 or spring 2022. The math results are presented visually in Figure 1, with the specific RI values for math and reading at the end of the article in Table 1.
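To make the RI arithmetic concrete, here is a minimal Python sketch of the calculation described above. The 0.50 and 0.40 shares are the illustrative figures from the example in the text, not actual NWEA data:

```python
def representation_index(share_of_top_group: float, share_of_population: float) -> float:
    """Representation index (RI): a group's share of the high-scoring group
    divided by its share of the overall student population.
    RI > 1 means the group is overrepresented among high scorers;
    RI < 1 means it is underrepresented."""
    return share_of_top_group / share_of_population

# Illustrative example from the text: White students make up 50 percent of
# those at or above the 90th percentile but 40 percent of all students.
ri = representation_index(0.50, 0.40)
print(round(ri, 2))  # -> 1.25, i.e., proportional overrepresentation
```

The same function covers underrepresentation: a group that is 10 percent of high scorers but 20 percent of the population yields an RI of 0.5.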
Figure 1. Change in the 90th percentile representation index in math by race/ethnicity, 2019 and 2022
First, Figure 1 makes clear that even prior to the pandemic in 2019, there was already substantial inequity within the 90th percentile. For example, Table 1 shows that Hispanic students were only 45 percent as represented within the 90th percentile group (RI = 0.45) as they were in the overall student population. Conversely, Asian students were 274 percent as represented as they were in the overall student population (RI = 2.74). Unfortunately, this only became worse after the pandemic. Figure 1 shows that, in spring 2022, the population of third grade students meeting the 90th percentile in math was less Black and Hispanic and more White and Asian than it was in spring 2019. Black student representation decreased from a 2019 RI of 0.29 to a 2022 RI of 0.24—a drop of roughly 20 percent. Hispanic students also became less represented, moving from an RI of 0.45 to 0.35 (a decrease of more than 20 percent). Meanwhile, White students (RI of 1.33 to 1.35) and Asian students (RI of 2.74 to 2.93) became even more overrepresented within the top 10 percent of math achievement. This trend was also present in reading (see Table 1) but was not as extreme as in math.
Importantly, just because fewer Black and Hispanic students scored at the 90th percentile does not mean they don’t need services. Talented students of color very much still exist, but because of the pandemic, the percentiles their scores represent are now lower. A hypothetical student of color who would have scored at the 91st percentile had Covid never happened might now score at the 89th. That’s a seemingly small change in the student’s achievement, but because they fell below a fixed threshold, they no longer meet the criteria for a particular advanced learning opportunity—and the small change becomes quite consequential, given that so many educational decisions are based on fixed, often-arbitrary percentile cut scores.
What does this mean for schools?
Around the country, K–12 gifted programs and exam-based high schools were already struggling with equity prior to the pandemic. A 2019 study found that students in the top 20 percent of socioeconomic status were identified as gifted at a rate of about 12 percent, compared to just over 2 percent for their peers in the lowest 20 percent. Unfortunately, our analyses suggest that, when schools go to identify students for these opportunities in the 2022–23 school year, disproportional representation is likely to get worse. Although few advanced learning opportunities identify students solely based on a single data point (and we do not advise using MAP Growth scores alone to make such decisions), if our findings hold for other assessments and criteria, students of color will be identified for advanced learning opportunities at even lower rates than they were prior to the pandemic (and again, those were already very bad). Importantly, this is not something that is unique to any single test, nor does it represent a failure of any test. Instead, it is an artifact of a massive and unprecedented interruption in learning that was experienced unequally by students from traditionally marginalized groups. Combined with the common practice of relying on fixed and often inflexible cut scores for program admission, schools arrive at a perfect storm of inequity.
What should schools do?
One of the simplest ways schools could approach this challenge with an eye toward improving equity would be to rely on district or school building norms to make such placement decisions using their most recent data. Conceptually, this means identifying students for advanced learning opportunities if they rank in the top X percent of students in their district or school, as opposed to the top X percent of the nation. Doing so removes the influence of pre-Covid national norms from the equation, keeps the proportion of students identified stable from year to year (and school to school), and makes the students identified more representative of the larger student population. School districts like New York City and Fairfax County (including Thomas Jefferson High School for Science and Technology), as well as the states of New Jersey and Illinois, had already incorporated local normative criteria into their decision-making processes prior to the pandemic. There are other conceptual benefits to this practice, but its equity benefits alone make it worth considering.
Perhaps more importantly, schools should avoid the temptation to focus their Covid recovery efforts solely on minimal proficiency of grade-level standards. To be sure, struggling students require urgent intervention to recover from the pandemic. But ignoring the needs of advanced learners, particularly those of color or who are from low-income families, will only perpetuate pre-Covid inequalities in advanced learning opportunities. Instead, schools should see this as an opportunity to implement talent development activities that would mitigate these disparate rates of advanced achievement. Much attention is being paid to efforts to help students rebound from the pandemic. But if those efforts focus solely on rebounding to minimal proficiency, the worsening equity we describe above risks becoming a permanent fixture of American education.
Table 1. Representation indices for spring 2019 and spring 2022 by student subgroup
Editor’s note: This was also published as a guest article in an edition of “Advance,” a newsletter from the Thomas B. Fordham Institute written by Brandon Wright, our Editorial Director, and published every other week. Its purpose is to monitor the progress of gifted education in America, including legal and legislative developments, policy and leadership changes, emerging research, grassroots efforts, and more. You can subscribe on the Fordham Institute website and the newsletter’s Substack.