The Covid-19 pandemic has run roughshod over so much of our education system, closing schools, sending students home to try to learn remotely, and obliterating last year’s summative state tests. One consequence of that cancellation is that, even if students are tested this spring, it will still be impossible to construct typical measures of their learning growth, as most such measures incorporate the previous year’s score. As fanatics for student growth measures—given that they are the fairest and most accurate metrics of schools’ impacts on achievement—those of us at the Fordham Institute wanted to know if some kind of value-added calculations could still be produced despite the testing gap year. Such measures would provide helpful information about which districts, schools, and students have progressed the most and which have experienced the worst Covid-induced learning loss during the pandemic. That would help us identify schools and practices worth emulating, and highlight institutions where students need the most help once the pandemic is behind us.
Fordham’s new report, Bridging the Covid Divide: How States Can Measure Student Achievement Growth in the Absence of 2020 Test Scores, provides the answers. We turned to a team of researchers in the department of economics at the University of Missouri—Ishtiaque Fazlul, Cory Koedel, Eric Parsons, and Cheng Qian—who have many years of experience studying how best to measure achievement growth. The team used administrative data from Missouri to simulate the testing gap year that states face as a result of Covid-19, and to generate ideas about how to work through it. Using data from the 2016–17 through 2018–19 school years, they calculated growth over two years to determine how similar gap-year estimates are to the “business-as-usual” condition where testing data are available every year.
Their results speak to the feasibility of estimating two-year growth measures for districts and for schools, including technical suggestions for handling thorny data issues such as student mobility. The researchers go on to assess the feasibility of growth measures when two years of test scores are missing, simulating the condition if spring 2021 testing is also cancelled.
There’s good news and bad news:
- Happily, both district- and school-level growth estimates based on a single-year gap convey information that’s very similar to growth estimates based on data with no gap year. Rankings of districts and schools only change slightly when a gap year in testing is simulated, and demographic factors such as race and socioeconomic status are not predictive of such changes. This analysis also suggests that such estimates will be valid for large subgroups such as economically disadvantaged students. We can’t say whether that will be the case for smaller subgroups.
- But there’s bad news, too. Just 27 percent of students attend schools that could generate growth measures if two consecutive years of tests are missing. That’s because most students in the standard testing window, grades three through eight, who were tested in 2019—the last time statewide assessments were implemented—will be in different schools by the spring of 2022. For example, third graders in 2019 will be sixth graders in 2022, which in most districts will make them middle schoolers. If we want to have any school-level measures of student progress in the near future, it’s vitally important that states assess students in 2021. (District growth measures will be doable, and relatively accurate, with another year of missing test data.)
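The cohort arithmetic behind that finding can be sketched in a few lines of Python. This is purely an illustration of the grade-progression logic, not code from the report: it simply advances each 2019 tested cohort three school years and checks whether it still falls inside the grades-three-through-eight testing window in spring 2022.

```python
# Illustrative sketch of the cohort math (not from the Fordham report).
# Statewide testing window: grades 3 through 8.
TESTED_GRADES = range(3, 9)

def grade_in_2022(grade_in_2019):
    """Advance a 2019 cohort three school years, to spring 2022."""
    return grade_in_2019 + 3

for g in TESTED_GRADES:
    future = grade_in_2022(g)
    status = "still in 3-8 window" if future in TESTED_GRADES else "aged out of window"
    print(f"grade {g} in 2019 -> grade {future} in 2022: {status}")
```

Only the 2019 third, fourth, and fifth graders remain inside the testing window in 2022, and even they will typically have moved from elementary to middle school by then, which is why school-level growth measures mostly break down after two missing years.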
In practical terms, what does this mean?
Calculating student growth measures from 2019 to 2021 is eminently feasible, and the results will be quite accurate—so long as states test students this year. Those measures will provide essential information to guide the educational recovery phase.
But if states cancel testing this year, too, it will be extremely difficult to determine how effective individual schools were during this challenging, historic period in American education. And of course, it will further delay the day when we can resume measuring student progress and holding schools accountable.
To be sure, we understand the challenge of testing students during a pandemic. Though miraculous vaccines are offering light at the end of a long and dark tunnel, it’s hard to predict exactly how the next few months will unfold. Even if teachers are vaccinated and students are welcomed back for in-person learning (a big “if,” especially in big-city school systems), some families will likely want their children to remain home until they are vaccinated, too. And taking precious days out of the instructional calendar for testing this spring—just when schools can finally start to address students’ social and emotional needs, and significant learning losses—is a hard sell, even for testing-and-accountability hawks like us.
So allow me to make a humble suggestion, albeit one not proposed by the study’s authors: States might shift the spring 2021 assessments to fall 2021, when schools reopen (with luck, for all students). This would allow states to compute those all-important student growth measures for the 2019–21 period, plus establish a baseline for student progress during the 2021–22 school year. To be sure, some states will be better equipped to manage this move than others, particularly those that don’t currently mandate spring testing in statute and that have enough internal capacity to acclimate schools to new fall assessment schedules. And statistical adjustments will need to be made for summer learning loss.
Still, if states start now, they’ve got nine months to put revised policies in place. Compared with the mere weeks they had to throw together plans when Covid-19 first descended in March 2020, that should feel like a lifetime to state and local education officials. And for all of the reasons stated above, it’s well worth the effort.