The closure of schools in response to the Covid-19 pandemic has left an indelible mark on education worldwide. As nations grappled with closures of varying lengths, the implications for student learning became increasingly evident. Recent data from the OECD’s Programme for International Student Assessment (PISA) have shed some light on the extent of the damage and its potential economic repercussions.
As is well known, PISA is an international measure of the academic achievements of fifteen-year-old students that serves as a critical barometer of global education standards. Its latest results (from 2022), incorporating data from both pre- and post-Covid assessment rounds, provide a comprehensive view of the pandemic’s still-evolving impact on academic achievement.
The key finding is stark. Between 2018 and 2022, scores declined by an average of 14 percent of a standard deviation, equivalent to seven months of learning. And that’s after controlling for pre-Covid trends! Given the wide reach of the PISA assessments—175 million students in seventy-two countries—this finding illustrates the global nature of the pandemic’s negative effect on academic achievement. These results go beyond raw scores, as they have been revised using data over time and after taking into account deviations from the long-run mean. They are also consistent with many national studies, international comparisons, and other reviews of actual learning losses.
The extent of these losses varied significantly from country to country depending on how long schools were closed (see figure 1). Countries where schools were closed for shorter periods experienced relatively minor losses, whereas losses of up to a year’s worth of learning were observed in the countries with the longest closures. Immigrant students faced bigger setbacks than their native-born peers, except in countries with longer closures, where the learning loss for students with an immigrant background was similar to that of native-born students.
Figure 1. Learning loss depending on the length of school closure
Note: Losses shown are percent of a standard deviation (SD); here, 20 points equal 0.20 SD, and about 0.25 SD is equivalent to a year’s worth of learning.
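As a rough check on that conversion (taking the note’s 0.25-SD-per-year rule of thumb as given), the headline loss of 14 percent of a standard deviation works out to roughly seven months:

$$0.14\ \text{SD} \times \frac{1\ \text{year}}{0.25\ \text{SD}} \approx 0.56\ \text{years} \approx 7\ \text{months of learning}$$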
The PISA data also show differences in learning losses among students with different levels of performance (see figure 2). In countries with school closures of average duration—about 5.5 months—learning losses were similar for low-, average-, and high-achieving students. However, in countries with shorter closures, the best students experienced minimal setbacks, with the learning losses mostly being incurred by average- and low-achieving students. In countries with longer closures, the largest learning losses were experienced by high-achieving students.
While this seems counterintuitive, differences in learning loss at different achievement levels in countries with short and long closures can be associated with differences in overall achievement in these countries. Countries with the longest closures are also the countries with the lowest achievement in PISA, while countries at the top of the PISA rankings closed schools for much shorter periods, on average. Thus, we see larger losses among the lowest-achieving students in countries with short closures and high achievement, similar losses across the achievement distribution in countries with closures of average length, and larger losses among the highest-achieving students in countries with very long closures and low achievement. This is what Figure 2 shows. Nevertheless, taken across all countries, losses remain greater for the lowest achievers.
Figure 2. Learning loss estimates depending on student achievement quantiles and the length of closures
These results are consistent with a recent review of grade-four reading scores in the Progress in International Reading Literacy Study (PIRLS), which is conducted by the International Association for the Evaluation of Educational Achievement (IEA). PIRLS measures the reading proficiency of nine- to ten-year-olds and has been conducted every five years since 2001. We used comparable data from fifty-five countries or regions from the assessment rounds in 2001, 2006, 2011, 2016, and 2021—a sample of more than 1 million students across all rounds. The review showed that, in countries with relatively longer school closures, actual achievement in schools that closed for more than eight weeks was 34 points lower than expected, equivalent to more than a year of schooling. Learning losses were greater in schools closed for longer than average, and lower-achieving students experienced much larger losses than their peers.
Countries with no school closures achieved the results that might have been expected based on their previous levels of achievement. This was the case in Sweden, where primary schools were never closed. Likewise, Denmark, Singapore, and a few other East Asian countries where schools were closed for only short periods and actions were taken to maximize the effectiveness of online education—such as giving schools extra resources for teaching and for student well-being efforts—experienced little or no learning loss.
The consequences of the learning losses stemming from the Covid-19 school closures extend beyond the academic realm. The loss of human capital among the current generation of students will have enduring economic implications, both for the students themselves and for their countries. When they enter the labor market, their earnings will be lower than would have been the case in the absence of the learning losses, which will constrain their countries’ productivity, economic output, and growth and development.
The results from PISA and PIRLS are unequivocal: A crisis in education has arrived, and low-achieving students are being disproportionately affected. As we navigate these challenges going forward, it will be imperative to make concerted efforts to find evidence-based strategies to mitigate the damage done by the pandemic closures and to enhance students’ learning outcomes. The time to act is now, for the sake of both current and future generations.
Editor’s note: A portion of this essay is excerpted from the author’s Substack, The Education Daly.
I recently examined the rise in teacher absenteeism post-pandemic. I concluded that it’s a serious threat to learning recovery, and that it reflects broader shifts in the teacher labor market.
I dug deeper recently by inquiring about the problem in two affluent Chicago-area school districts—Hinsdale, which has seen a sharp rise in absenteeism rates, and Evanston, which surprisingly has reported a steep drop.
I was surprised when neither district confirmed the accuracy of the data they had previously submitted to the state. Instead, they responded with a combination of ambiguity and silence. This, I realized, is becoming familiar. In many ways, it epitomizes our failed pandemic recovery in which schools are stuck in a downward spiral of lost purpose. Teacher absenteeism is just one facet of a broader, grimmer reality.
Our education system is struggling in its entirety. The bounce-back we once envisioned—a fiery national mobilization to overcome learning setbacks—has eluded us. It’s depressingly easy to list the evidence:
More families are choosing to have their children skip kindergarten.
Against this backdrop, higher teacher absenteeism is predictable. It would be shocking if it had not increased. No group, from students to parents to teachers to administrators, feels the same connection to our schools that they once did. No matter how many times we try to psych ourselves up that now, finally, we’re going to cue the Rocky theme music and sprint up the stairs to the Philly art museum, all we have is false starts. We have districts struggling to count the number of days teachers miss.
This, ultimately, is why I suspect the issue has gotten so little attention. It’s depressing. We don’t know how to solve it. Nobody wants to make teachers feel bad after all they’ve been through. I share that feeling.
But I also believe this: If we don’t start talking openly about the failure of our pandemic recovery, we will be slipping and sliding for another generation. It’s a crisis.
Teacher absenteeism will normalize when everything else normalizes—when our schools regain their core sense of purpose.
Here are some steps that might help:
Let’s officially move forward. It’s been almost exactly four years since kids nationwide were sent home during the first wave of Covid. A full Olympiad. After all that time, there’s still residue in our schools that needs cleaning out. We need to move on—collectively. I don’t mean we should abandon health mitigation measures and leave those with vulnerable immune systems to fend for themselves. I’m talking about the psychological end of the pandemic’s hold on schools. A national day of remembrance for all those lost during the pandemic. A chance to appreciate everyone from health care providers to neighbors who pulled together to help us get to the other side. A true holiday. And then, a new chapter where schools (and state leaders) re-embrace norms around attendance, engagement, and achievement. It’s time.
Set some goals. What would it mean for our schools to “recover” from the pandemic? What’s the finish line? Have you seen anyone define it? Lack of clarity about what we’re trying to achieve and by when is impairing our progress. As calls increase for another infusion of federal funding, it’s important that new resources be paired with requirements for states to clearly articulate their targets and timelines for Covid recovery. We need a plan here.
Focus on first principles at the school level. Some schools lost the thread somewhere in the fog of Covid—understandably. But now, their attention is all over the place. Folks are tired. Instead of staying the course with more professional development sessions on differentiating instruction, such schools should take time getting clear about their basic goals. Back to square one. Physical health and mental well-being have required outsized attention during this era—for good reason. However, we need to refocus on our academic mission. There’s no shame in high standards for our students and high expectations for our educators. Kids can do homework. They can study for tests. They can write essays. They can stay off social media during class. They can submit science fair projects. They can show up five days a week. If we treat our students—particularly those from lower-income backgrounds—as though they are so damaged by the pandemic that they can’t possibly meet real challenges, they won’t. We’ve learned that the hard way.
Restore state-level accountability. States hit pause on gathering key information and using it to intervene with struggling schools. They had no choice. But some states have waited too long to resume healthy oversight. Data-rich states like Illinois have a head start. However, when districts aren’t accurately reporting basic data like teacher attendance and it isn’t being flagged and addressed through routine quality controls, it’s probably a sign that state agencies can play a more assertive role. Let’s give the public confidence that our schools can execute.
Sell the value of education. It’s no longer a self-evident proposition for anyone involved. Why does this enterprise warrant so much of our attention and funding? We desperately need to invest in our young people. But taxpayers will be increasingly skeptical if all they read is headlines about kids and teachers not showing up. States—especially those with declining enrollment—should be preparing now. More importantly, our young people aren’t buying in. What’s being done to change that? Until we win them over, we’re stuck.
All of these things are related. Teachers will stop missing days when schools are exciting, vibrant, successful places where they want to spend their time. Students will stop missing days at exactly the same time. It’s been four years. The clock’s ticking.
Cheers
Massachusetts governor Maura Healey expressed opposition to the abolition of her state’s graduation exam. —Boston Globe
The Montgomery County school board adopted knowledge-rich, phonics-focused English language arts curricula for elementary schools. —MoCo 360
A newly proposed bill in California would require schools to use science-of-reading-based curriculum and instruction. —The 74
Jeers
Stripped autonomy, increased workloads, and chaotic classrooms have created an unsustainable job for teachers that a few pay raises can’t fix. —Ben Stein, USA Today
While other districts have recognized the importance of content knowledge to literacy, a Denver charter school cut science classes to focus on reading intervention. —Chalkbeat
News stories featured in Gadfly Bites may require a paid subscription to read in full. Just sayin’.
Nationally renowned and self-described curriculum evangelist Karen Vaites took a look at the results of Ohio’s first-ever reading curriculum audit, which hit the news a couple of weeks ago. She uses Aaron Churchill’s initial analysis of the findings as the basis for her discussion. Vaites seems pessimistic about the state of play and raises questions about the use of EdReports ratings in Ohio and elsewhere. “It’s time for states to show a lot more savvy in this realm,” she concludes, “and for literacy advocates to come off ‘curriculum-agnostic’ perches to proclaim it clearly: some curricula are vastly better than others, and our teachers deserve the best.” More to come, I reckon. (eduvaites, 3/25/24)
Mt. Healthy City Schools is facing a big deficit, and its elected and appointed leaders are trying to get a handle on it. Nothing unusual there, but their efforts are somewhat hampered by a seemingly ever-changing deficit amount (from $700K in July, to $0 in August, to $5 million in November, to $7.5 million earlier this week). I’m sure it will all work out eventually with a mix of cuts, austerity, and additional revenue. (I mean, this isn’t a charter or private school that would just have to shut down if facing such overwhelming financial problems, amiright?) Adding to my bemusement is the elected school board begging to go under the harshest state fiscal oversight possible to try to get as much help as it can to right the ship. (Cincinnati Enquirer, 3/24/24)
Recently, Ohio policymakers have been mulling making changes to the state’s attendance tracking framework. It wouldn’t be the first time they’ve done so. In 2016, they overhauled student attendance and absenteeism policies via House Bill 410. Among its many provisions, this legislation transitioned the state’s definition of chronic absenteeism from days to hours. Rather than counting how many days of school students missed in a year, the state now requires districts to track the number of hours.
This was a critical change, as it made Ohio’s student attendance data much more accurate. Previously, when schools tracked attendance by days, students could miss significant chunks of instructional time—say, for a morning doctor’s appointment or a family emergency in the afternoon—and still be marked present for the entire day. For an occasional appointment or emergency, such imprecise tracking isn’t a big deal. But for elementary students who miss the first hour of class twice a week because they’re late to school, or high schoolers who regularly skip their final class of the day because they’re just not feeling it, that time adds up. Tracking attendance by hours makes it possible for educators and parents to recognize the cumulative impact of seemingly small absences—illustrated in the sketch below—and then work to address them.
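To see how quickly small absences compound, here is a toy calculation (all numbers are hypothetical, not Ohio’s actual requirements) of what hour-based tracking records for the chronically late student described above—absences that day-based tracking would miss entirely:

```python
# Toy illustration: a student who is one hour late twice a week is never
# marked absent for a full day, yet loses weeks of instruction over a year.
HOURS_PER_DAY = 6.5    # assumed length of a school day
WEEKS_PER_YEAR = 36    # assumed instructional weeks per year

missed_per_week = 1.0 * 2                        # one hour late, twice a week
missed_hours = missed_per_week * WEEKS_PER_YEAR  # yearly total
missed_days = missed_hours / HOURS_PER_DAY       # day-equivalents

print(f"{missed_hours:.0f} hours missed, about {missed_days:.1f} school days")
# Day-based tracking would have recorded zero absences for this student.
```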
HB 410 didn’t change the definition of chronic absenteeism just for the sake of data transparency, though. The change aligned attendance policies with the state’s instructional requirements, which were also transitioning from days to hours. Previously, Ohio districts were required to be open for a certain number of days during each school year. To accommodate emergencies like snowstorms or water main breaks, administrators were provided with five “calamity days,” during which they could cancel classes without being required to offer students makeup instructional time. By shifting from days to hours, districts no longer needed calamity days. Instead, they could schedule “excess” hours above the minimum number of hours required by law, and hours missed above the minimum did not have to be made up.
Although this was a well-intentioned reform, it had an unexpected downside. “Excess hours” permitted districts to cancel class or alter their school schedules for questionable reasons. The latest and most ridiculous example is “eclipse fever,” which my colleague Jeff Murray recently discussed. He notes that the total solar eclipse that will occur on April 8 is a “stunning astronomical phenomenon” that offers schools a once-in-a-lifetime opportunity to provide students with firsthand science education. Rather than take advantage of this opportunity, however, district leaders across the state have decided to “close their entire districts for the whole day and provide zero educational opportunities whatsoever.” Their reasons range from potential traffic backups and Wi-Fi outages to safety concerns. Upon closer inspection, many of these reasons ring hollow, particularly because kids won’t only be missing out on a rare learning opportunity in science. They’ll be missing out on reading, math, and history, too.
Far more damaging is the spread of four-day school weeks. Over the last few years, this idea has gained traction nationwide. In Missouri, for example, 144 districts operated on a four-day schedule in 2023, adding up to more than 27 percent of the state’s total number of districts. Because the Buckeye State tracks instruction by hours instead of days, it would be fairly easy for Ohio districts to also make this jump. All they need to do is tack on a few instructional hours to the first four days of the week, and they can skip the fifth. In the last year, at least one Ohio district and one charter school have instituted four-day weeks. The transition drew national attention, and plenty of other Ohio districts are eagerly watching to see if they, too, should make the switch.
Administrators typically cite cost savings, teacher recruitment and retention, and improving school climate and attendance as reasons for this shift. The problem, however, is that the research on shorter weeks doesn’t live up to the hype. Cost savings are only about two percent, on average. A 2021 RAND report found that, although teachers viewed a four-day week as a “perk,” most said it was not a factor in deciding to work for their district, and the impact on retention depended on local context. Evidence is mixed on whether shorter weeks improve school climate and student behavior. Studies have not found any effect on attendance rates. But many studies do find negative impacts on achievement that are “roughly equivalent to a student being two to seven weeks behind where they would have been if they had stayed on a five-day week.”
What does all this mean for lawmakers who are considering if and how to revamp Ohio’s attendance tracking framework? Two things.
First, when it comes to tracking attendance for individual students, Ohio needs to keep the focus on hours. Given the lingering effects of the pandemic on student learning, as well as the notable negative impacts of chronic absenteeism, kids can’t afford to miss school. When they do miss class, educators and families need to be able to determine exactly how much time was missed, so they can track the cumulative impact and intervene when necessary. The best way to ensure that teachers and parents have this information is to maintain accurate hours-based determinations for chronic absenteeism, excessive absences, and habitual truancy. Otherwise, too many absences—and too many kids—can fall through the cracks.
Second, lawmakers should consider reverting back to requiring a minimum number of days that districts and schools must be open, while also identifying a minimum number of hours that must make up each day. Administrators would still be able to cancel classes when emergencies arise. But having a set number of calamity days, rather than an open-ended number of “excess hours,” would ensure that classes are only cancelled for true emergencies. Most importantly, districts would have to put a pause on shifting to four-day weeks. If, over the next few years, states like Missouri can show that four-day school weeks have significant positive impacts on student achievement, teacher recruitment and retention efforts, school climate, and district bottom lines, then Ohio lawmakers can reevaluate. But right now, the research isn’t promising on any of those fronts.
Tracking student attendance by hours, while tracking districts by hours and days, might give some folks pause. But the outsized importance of attendance on student outcomes makes this extra measure necessary. The state tried aligning the framework under days, and it didn’t work. Too many kids were missing too much class without anyone noticing. More recently, the state tried aligning under hours. But that didn’t work either. Districts opted to cancel classes for questionable reasons, and some have started to move toward scheduling changes that don’t appear to be in the best interest of kids. Now, all that’s left is to track students and districts by the measures that work best for each—hours for students, and hours plus days for districts.
How valuable is a bachelor’s degree? Less so than it used to be, says a new report, but the ultimate value depends on a number of factors, including tuition cost and college major.
A trio of researchers led by Liang Zhang of New York University focused on the internal rate of return (IRR) for students who graduated with a bachelor’s degree between 2009 and 2021, using data from the American Community Survey (ACS); 2009 was the first year in which the ACS collected information on the majors in which students completed degrees. They limited the sample to individuals who were born in the United States, were eighteen to sixty-five years old, held either a high school diploma or a bachelor’s degree as their highest level of education, were not currently enrolled in school, and had positive earnings. Applying these criteria yielded a final sample of 5.8 million individuals, evenly split between 2.9 million college graduates and 2.9 million high school graduates as the comparison group. The IRR calculation considers both the lifetime costs (e.g., tuition and forgone earnings) and benefits (e.g., higher earnings) of college by discounting future costs and benefits to their present value. One issue the researchers touch on is a potential mismatch between the ability levels of the two groups of students (A+ high schoolers vs. C- college grads). Lacking that specific data, they use “estimates from the existing literature” to adjust for the possible selection bias. Inexact, but at least on their minds.
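To make the mechanics concrete, here is a minimal sketch of an IRR calculation in Python. The dollar figures and time horizon are illustrative assumptions, not values from the report; the IRR is simply the discount rate at which the discounted benefits of a degree exactly offset its discounted costs:

```python
# Minimal IRR sketch: costs (tuition plus forgone earnings) during four
# college years, then an assumed annual earnings premium over a career.

def npv(rate, cashflows):
    """Net present value of yearly cash flows discounted at `rate`."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cashflows))

def irr(cashflows, lo=0.0, hi=1.0, tol=1e-6):
    """The rate that sets NPV to zero, found by bisection (assumes the
    NPV is positive at `lo` and negative at `hi`)."""
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if npv(mid, cashflows) > 0:
            lo = mid  # NPV still positive, so the IRR is higher
        else:
            hi = mid
    return (lo + hi) / 2

college_years = [-(15_000 + 30_000)] * 4  # hypothetical annual cost of college
premium_years = [20_000] * 40             # hypothetical annual earnings premium
print(f"IRR: {irr(college_years + premium_years):.1%}")
```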
First and foremost, they find that college degree completion still provides a solid return on investment compared to stopping at a high school diploma. Both median and mean earnings show an IRR between 9 and 10 percent. Male college graduates get a slightly lower return than their female counterparts, but the difference is around three-quarters of a percentage point. The analysts note that similar research in the late 1980s showed a larger IRR, and they suggest the decline likely reflects both the increase in college costs in the intervening years and the general flattening of wage growth following the Great Recession.
Additionally, IRR varies significantly depending on the college major a student pursues. Engineering and computer science majors are at the top (more than a 13 percent IRR), with business, health, and math and science close behind. At the lower end are education, humanities, and the arts (below an 8 percent IRR). The researchers note a strong increase in degree completion among those higher-return majors over the timespan of their analysis, despite the overall decline in college enrollment since 2010, which helps buoy the overall IRR findings.
The limitations noted by the researchers are small but important—including the lack of differential impacts calculated based on the selectivity of colleges attended or on tax-related policies that can decrease earnings or reduce the final cost of college attendance. More important is the fact that the labor market of tomorrow may not follow the historical trends on display here. Ongoing technological advancements in robotics and artificial intelligence, as well as the increase in career-technical education opportunities in the middle and high school years, have the potential to upend all employment sectors in unpredictable ways.
As clear as these data are about the declining but still quite positive return on a college degree even as recently as 2021, the future for today’s degree earners is nowhere near as crystalline as that hindsight.
For more than twenty-five years, public charter schools have served Ohio families and communities by providing quality educational options beyond the local school district. But it’s no secret that we’ve also had a long-standing debate over whether increasing school choice impacts students who remain in traditional districts.
In important—and sometimes impassioned—discussions such as these, rigorous research is critical to ground conversations in facts and evidence.
Our latest report offers an analysis of the rapid scale-up of Ohio charter schools during the late 1990s and early 2000s. It finds that charters slightly boosted the graduation and attendance rates of traditional district students, while having no significant impacts on their state exam scores.
These results follow a body of research from various locales showing that expanding educational choice—whether via public charter schools or private schools—consistently yields neutral to slightly positive impacts on traditional districts.
Does expanding educational options harm traditional school districts? This question—a central one in the school choice debate—has been studied numerous times in various locales. Time and again, researchers have returned with a “no.” Choice programs do no harm to school districts, and in many instances even lead to improvements through what economists call “competitive effects.” Brian Gill of Mathematica Policy Research, for instance, reports that ten out of eleven rigorous studies on public charter schools’ effects on district performance find neutral to positive outcomes. Dozens of studies on private schools’ impacts on districts (including ones from Ohio) find similar results.
This research brief by the Fordham Institute’s Senior Research Fellow, Stéphane Lavertu, adds to the evidence showing that expanding choice options doesn’t hurt school districts. Here, Dr. Lavertu studies the rapid expansion of Ohio’s public charter schools in some (largely urban) districts during the early 2000s. He discovers that the escalating competition in these locales nudged districts’ graduation and attendance rates slightly upward, while having no discernible impacts on their state exam results.
Considered in conjunction with research showing that Ohio’s brick-and-mortar charters outperform nearby districts, we can now safely conclude that charters strengthen the state’s overall educational system. Charters directly benefit tens of thousands of students, provide additional school options to parents, and serve as laboratories for innovation—all at no expense to students who remain in the traditional public school system.
It’s time that we finally put to rest the tired canard that school choice hurts traditional public schools. Instead, let us get on with the work of expanding quality educational options, so that every Ohio family has the opportunity to select a school that meets their children’s individual needs.
Introduction
Compelling evidence continues to show that the emergence of charter schools has had a positive impact on public schooling. Recently, professors Feng Chen and Douglas Harris published a study in a prestigious economics journal that found that students attending public schools—both traditional and charter public schools—experienced improvements in their test scores and graduation rates as charter school attendance increased in their districts.[1] Based on further analysis, the authors conclude that the primary driver was academic improvement among students who attended charter schools (what we call “participatory effects”) though there were some benefits for students who remained in traditional public schools (due to charter schools’ “competitive effects”).
These nationwide results are consistent with what we know from state- and city-specific studies: Charter schools, on average, lead to improved academic outcomes among students who attend them and minimally benefit (but generally do not harm) students who remain in traditional public schools. The estimated participatory effects are also consistent with what we know about Ohio’s brick-and-mortar charter schools, which, on average, have increased the test scores and attendance rates of students who attend them.
Chen and Harris’s study provides some state-specific estimates of charter schools’ total impact (combined participatory and competitive effects) in supplementary materials available online, but those appendices report statistically insignificant estimates of the total effects of Ohio charter schools. How could there be no significant total effect, given what we know about the benefits of attending charter schools in Ohio? One possibility is that their data and methods have limitations that might preclude detecting effects in specific states. Another possibility, however, is that their null findings for Ohio are accurate and that charter schools’ impacts on district students are sufficiently negative to offset the academic benefits for charter students.[2]
To set the record straight, we need to determine Ohio charter schools’ competitive effects—that is, their impact on students who remain in district schools. The novel analysis below—which addresses several limitations of Chen and Harris’s analysis[3]—indicates that although the initial emergence of charter schools had no clear competitive effects in terms of districtwide student achievement, there appear to have been positive impacts on Ohio districts’ graduation and attendance rates. Combined with what we know about Ohio charter schools’ positive participatory effects, the results of this analysis imply that the total impact of Ohio’s charter schools on Ohio public school students (those in both district and charter schools) has been positive.
There are limitations to this analysis. For methodological reasons, it focuses on the initial, rapid expansion of charter schools between 1998 and 2007. And although it employs a relatively rigorous design, how conclusively the estimated effects may be characterized as “causal” is debatable. But the research design is solid and, considered alongside strong evidence of the positive contemporary impacts of attending Ohio’s brick-and-mortar charter schools, it suggests that Ohio’s charter sector has had an overall positive impact on public schooling. Thus, the evidence indicates Ohio’s charter-school experience indeed tracks closely with the positive national picture painted by Chen and Harris’s reputable study.
Estimating the impact of charter school market share on Ohio school districts
A first step in estimating competitive effects is obtaining a good measure of charter school market share that captures significant differences between districts in terms of charter school growth.[4] Figure 1 illustrates the initially steep increase in the share of public school students enrolled in charter schools in the average “Ohio 8” urban district[5] (from no charter enrollment during the 1997–98 school year to nearly 14 percent during the 2006–07 school year), as well as the much more modest increase in the average Ohio district (nearly 2 percent of enrollment by 2006–07).[6] The rapid initial increase in some districts (like the Ohio 8) but not others provides a pronounced “treatment” of charter school competition that may be sufficiently strong to detect academic effects using district-level data.
Figure 1. Charter market share in Ohio districts
Focusing on the initial introduction of charter schools between 1998 and 2007 provides significant advantages. To detect competitive effects, one must observe a sufficient number of years of district outcomes so that those effects have time to set in. It can take time for districts to respond to market pressure, and there may be delays in observing changes in longer-term outcomes, such as graduation rates. On the other hand, it is important to isolate the impact of charter schools from the impacts of other interventions (notably, No Child Left Behind, which led to sanctions that affected districts after the 2003–04 school year) and other events that affected schooling (notably, the EdChoice scholarship program and the Great Recession after 2007). Because these other factors may have disproportionately affected districts that were more likely to experience charter school growth, it is easy to misattribute their impact to charter competition. To address these concerns, the analysis focuses primarily on estimating the impact of the initial growth in charter enrollments on district outcomes three and four years later (e.g., the impact of increases in market share between 1998 and 2003 on outcomes from 2001 to 2007).
After creating a measure that captures significant differences between districts in initial charter school growth, the next step is to find academic outcome data over this timespan. Ohio’s primary measure of student achievement for the last two decades has been the performance index, which aggregates student achievement levels across various tests, subjects, and grades. It is a noisy measure, but it goes back to the 2000–01 school year and thus enables me to leverage the substantial 1998–2007 increase in charter school market share. In addition to performance index scores, I use the graduation and attendance rates that appeared on Ohio report cards from 2002 to 2008 (which reflect graduation and attendance rates from 2000–01 to 2006–07).[7]
Using these measures, I estimate statistical models that predict the graduation, attendance, and achievement of district students in a given year (from 2000–01 to 2006–07) based on historical changes in the charter school market share in that same district (from 1997–98 to 2006–07), and I compare these changes between districts that experienced different levels of charter school growth. Roughly, the analysis compares districts that were on similar academic trajectories from 2001 to 2007 but that experienced different levels of charter entry in prior years. A major benefit of this approach is that it essentially controls for baseline differences in achievement, attendance, and graduation rates between districts, as well as statewide trends in these outcomes over time. And, again, because impact estimates are linked to charter enrollments three and four years (or more) prior to the year in which we observe the academic outcomes, the results are driven by charter-school growth between 1998 and 2003—prior to the implementation of No Child Left Behind and EdChoice, and prior to the onset of the Great Recession.
Finally, I conducted statistical tests to assess whether the models are in fact comparing districts that were on similar trajectories but that experienced different levels of charter entry. First, I conducted “placebo tests” by estimating the relationship between future charter market shares and current achievement, attendance, and graduation levels in a district. Basically, if future market shares predict current academic outcomes, then the statistical models are not comparing districts that were on similar academic trajectories and thus cannot provide valid estimates of charter market share’s causal impact. I also tested the robustness of the findings to alternative graduation rate measures and the inclusion of various controls that capture potential confounders, such as changes in the demographic composition of students who remained in districts. The results remain qualitatively similar, providing additional support for the causal interpretation of the estimated competitive effects.[8]
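For readers who want to see the shape of such a model, here is a minimal sketch (not the author’s actual code) of a two-way fixed-effects regression with lags and leads of charter market share. The file and column names are hypothetical, and a real application would follow the specification described in the appendix:

```python
# Sketch of the two-way fixed-effects design described above: district
# outcomes regressed on current, lagged, and future ("lead") charter market
# share, with district and year fixed effects.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("district_panel.csv").sort_values(["district", "year"])

g = df.groupby("district")["market_share"]
for k in range(1, 5):
    df[f"ms_lag{k}"] = g.shift(k)    # market share k years earlier
    df[f"ms_lead{k}"] = g.shift(-k)  # future market share (placebo test)

cols = (["grad_rate", "market_share"]
        + [f"ms_lag{k}" for k in range(1, 5)]
        + [f"ms_lead{k}" for k in range(1, 5)])
panel = df.dropna(subset=cols)

# C(district) and C(year) absorb baseline differences between districts and
# statewide trends; significant "lead" coefficients would signal that treated
# and untreated districts were not on similar trajectories.
rhs = " + ".join(cols[1:]) + " + C(district) + C(year)"
model = smf.ols(f"grad_rate ~ {rhs}", data=panel).fit(
    cov_type="cluster", cov_kwds={"groups": panel["district"]}
)
print(model.params.filter(like="ms_"))
```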
Findings
Finding No. 1: A 1-percentage-point increase in charter school market share led to an increase in district graduation rates of 0.8 percentage points four years later. That implies that districts with a 10 percent charter market share had graduation rates 8 percentage points higher than they would have had in the absence of charter school competition.
I begin with the analysis of graduation rates. Figure 2 (below) plots the estimated impact of a one-percentage-point increase in charter market share on district-level graduation rates. Roughly, the thick blue line captures differences in graduation rates between districts that experienced a one-percentage-point increase in charter market share and those that did not. Year 0 is the year of the market share increase, and the blue line to the right of 0 captures the estimated impact of an increased market share one, two, three, four, and five (or more) years later. The dotted lines are 95 percent confidence intervals: if the analysis were repeated many times, intervals constructed this way would contain the true effect 95 percent of the time.
The results indicate that an increased charter market share had no impact on district graduation rates in the first couple of years. However, an increase in charter market share of 1 percentage point led to district graduation rates that, four years later, were 0.8 of a percentage point higher than they would have been in the absence of charter competition. Thus, if the average district had a charter market share of 10 percent in 2003, the results imply that it realized a graduation rate 8 percentage points higher in 2007 (i.e., 0.8 × 10). For a typical Ohio 8 district, which experienced a 14-percentage-point increase in charter market share, that was the equivalent of going from a graduation rate of 57 percent to one of 68 percent.
Figure 2. Impact of charter market share on districts’ graduation rates (2001–2007)
Importantly, as the estimates to the left of Year 0 reveal, there are no statistically significant differences in graduation rates between districts that would go on to experience a 1-percentage-point increase in market share (in Year 0) and those that would not. This is true one, two, three, four, and five (or more) years prior. Controlling for changes in districts’ student composition (e.g., free-lunch eligibility, race/ethnicity, disability status, and achievement levels) does not affect the results. Finally, although the estimates in Figure 2 are statistically imprecise (the confidence intervals are large), the Year 4 estimate is very close in magnitude to the statistically significant estimate (p<0.001) based on a more parsimonious specification that pools across years (see appendix Table B1). These results suggest that competition indeed had a positive impact on district students’ probability of graduation.
One potential limitation of this study is that the market share measure includes students enrolled in charter schools dedicated to dropout prevention and recovery. If students who were likely to drop out left district schools to attend these charter schools, then there would be a mechanical relationship between charter market share and district graduation rates. This dynamic should have a minimal impact on the graduation results, however. First, to explain the estimated effects that show up three and four years after charter market shares increase, districts would have needed to send students to dropout-recovery schools while those students were in eighth or ninth grade (not grades ten through twelve, as the effects show up in Year 4), and these students would have needed to be ones who would go on to drop out in eleventh or twelfth grade (as opposed to grade nine or ten). That is a narrow set of potential students. Second, for this dynamic to explain the results (where a one-percentage-point increase in charter market share leads to a 0.8-percentage-point decrease in dropouts), a large majority of the market share increase that districts experienced would need to be attributable to these eventual dropouts. Given the small proportion of charter students in dropout-recovery schools, and the even smaller proportion who fit the profile just described, shipping students to charters focused on dropout prevention and recovery can be only a small part of the explanation.
Finding No. 2: A 1-percentage-point increase in charter school market share led to an increase in district attendance rates of 0.08 percentage points three years later. That implies that districts with a 10 percent charter market share had attendance rates 0.8 of a percentage point higher than they would have had in the absence of charter school competition.
The results for district attendance rates are also imprecise, with unstable point estimates and large confidence intervals in Years 4 and 5 (or later). But Figure 3 indicates a statistically significant effect in Year 3 of 0.08 percentage points, and this Year-3 estimate is very close in magnitude to the statistically significant estimate (p<0.01) based on a more parsimonious specification that pools across years (see appendix Table B1). For the typical Ohio 8 district, the estimated effect is the equivalent of their attendance rate going from 90.5 percent to 91.6 percent.
Figure 3. Impact of charter market share on districts’ attendance rates (2001–2007)
Thus, as was the case with graduation rates, these by-year estimates are imprecise, but they confirm more precise estimates from models that pool across years, provide evidence that there is a plausible time lag between increases in market share and increases in attendance rates, and provide some confidence that the results are not attributable to pre-existing differences between districts that experienced greater (as opposed to lesser) increases in charter competition. That the timing of attendance effects roughly corresponds to increases in graduation rates provides further support that the results don’t merely capture statistical noise.
Finding No. 3: An increase in charter school market share did not lead to a statistically significant change in districts’ scores on the performance index.
The results for districtwide student achievement indicate no statistically significant effects (see Figure 4, below). Unfortunately, we lack the statistical power to rule out effects that one might deem worthy of attention. Additionally, the immediate (statistically insignificant) decline in the performance index in the year of the market share increase (Year 0) might be attributable to relatively high-achieving students leaving for charter schools and thus might not capture changes in student learning. If high-achieving students were more likely to go to charter schools, then districts’ performance index scores should decline in exactly the year that charter market shares increased.[9]
Figure 4. Impact of charter market share on districts’ scores on the performance index (2001–2007)
The results of a simple model that pools across years indicate a negative relationship between charter market share and district performance index scores (see Table B1 in the appendix). The results in Figure 4, however, call this negative correlation into question. Controlling for future market share (as the model used to generate Figure 4 does) renders the estimates from Year 1 to Year 4 statistically insignificant. That the coefficient for five years (or more) prior is -0.04 and nearly statistically significant suggests that the relationship in Table B1 between market share and the performance index may be attributable to districts experiencing achievement declines being more likely to subsequently experience charter school growth, rather than the other way around.[10] The estimate from the simple performance-index model that pools across years is also the only one that is not robust to limiting the analysis to pre-NCLB years (see Table B1 in the appendix).
Despite the somewhat imprecise (and perhaps invalid) statistical estimates of the impact of charter market share on districts’ performance index scores, what one can say is that the analysis rules out large declines in the achievement levels of district students. Additionally, these results are similar to those of a 2009 RAND study that found no statistically significant differences in student-level test score growth among students who attended a traditional public school that had a charter school in close proximity, as compared to students whose traditional public schools were farther from the nearest charter school. That study did not leverage the initial growth in the charter school sector, but it provides a different type of evidence and relatively precise estimates.
Thus, in spite of the potential limitations related to changes in student composition and imprecise (and perhaps invalid) statistical estimates, the results of this analysis provide one more piece of evidence that charter school competition did not have negative effects on student learning in district schools.
What can we learn from what happened from 1998 to 2007?
The introduction of charter schools in Ohio significantly disrupted school district operations. For example, in 2002, EdWeek documented Dayton Public Schools’ newfound dedication to academic improvement in response to its rapidly expanding charter sector. As Chester E. Finn, Jr. discussed in a post that same year, the district considered a number of reforms—notably the closure of under-enrolled and under-performing schools, which Chen and Harris’s recent study identified as the most likely mechanism explaining the positive impact of charter school competition on districtwide academic outcomes. The results above suggest that, for the average Ohio district experiencing charter school growth, these efforts did not yield large positive impacts on student achievement (though they very well may have in Dayton[11]), nor any discernible negative impacts.
On the other hand, the average Ohio district’s response to charter school competition led to increases in attendance and graduation rates. The more charter competition a district felt, the less likely their students were to miss school or drop out three or four years later. That charter school competition appears to have spurred improvements in Ohio school districts between 2001 and 2007 is particularly remarkable given how maligned Ohio’s charter sector was in those days. Charter schools were not nearly as effective in those early years as they are today (though the best evidence for that time period indicates that brick-and-mortar charter schools were no worse, on average, than district schools). Why that may have occurred is a topic for another day, but one wonders whether keeping students in school (and, thus, keeping the state funds that follow them) became more important to districts as they began to face competition. For now, though, the analysis above provides some further reassurance that it is worthwhile to draw attention to districts with solid charter market shares as an indicator of healthy school marketplaces.
About the author and acknowledgments
Stéphane Lavertu is a Senior Research Fellow at the Thomas B. Fordham Institute and Professor in the John Glenn College of Public Affairs at The Ohio State University. Any opinions or recommendations are his and do not necessarily represent policy positions or views of the Thomas B. Fordham Institute, the John Glenn College of Public Affairs, or The Ohio State University. He wishes to thank Vlad Kogan for his thoughtful critique and suggestions, as well as Chad Aldis, Aaron Churchill, and Mike Petrilli for their careful reading and helpful feedback on all aspects of the brief. The ultimate product is entirely his responsibility, and any limitations may very well be due to his failure to address feedback.
Endnotes
[1] An open-access version of the paper is available here, and an accessible summary of an earlier version of the paper is available here. These results are consistent with those of a prior Fordham study.
[2] Note that their analysis leaves out students in virtual charter schools and in schools serving special-education students, which suggests that the participatory effects should be positive.
[3] The primary limitation of Chen and Harris’s analysis relates to their data. Their study measures important quantities with significant error (e.g., charter market share and graduation rates), does not exploit pronounced differences in charter school growth between districts (e.g., their achievement data begin in 2009, well after the initial and steep charter school growth I examine in my analysis), and focuses on years after the implementation of No Child Left Behind and the onset of the Great Recession (both of which disproportionately affected districts with growing charter sectors). These limitations likely make it difficult to detect effects in specific states, particularly states like Ohio, where the measurement error and lack of market-share variation are significant. I am not criticizing the quality of their valuable nationwide analysis. The data they use are the only option for conducting a rigorous nationwide analysis, as they need measures that are available across states. But when producing Ohio-specific estimates of charter school effects, these limitations might preclude detecting effects because the signal-to-noise ratio is too low. I provide further details in the appendix.
[4] I thank Jason Cook for kindly sharing these data with me, which he collected for this study of charter competition’s impact on district revenues and expenditures. Note that Cook’s study estimates charter enrollment effects in the post-NCLB period, which may introduce some complications that my study seeks to avoid.
[5] The Ohio 8 districts are Akron, Canton, Cincinnati, Cleveland, Columbus, Dayton, Toledo, and Youngstown.
[6] Average market share increased more slowly and unevenly after 2007, as charter closures became more prevalent in districts with more mature charter sectors. Thus, although average enrollments continued to increase statewide through 2014, there is not a clean upward trajectory in charter market share in every district.
[7] These graduation rates are not as good as the cohort-based graduation rates introduced in later years, but they cover the same time span as the performance index and are based on calculations that account for actual enrollments and dropouts in every high school grade.
[8] Specifically, I estimated two-way fixed-effects panel models with lags and leads of district market share as predictor variables and 2001–2007 achievement, attendance, and graduation rate data as the dependent variables. Scholars have recently identified potential problems with these models, and there are concerns about the extent to which they capture “difference in differences” comparisons that warrant a causal interpretation, which is why I sometimes use qualifiers such as “roughly” when describing what the estimates of my analysis capture. The basic model includes district and year fixed effects, but the results are qualitatively similar when I control for time-varying demographics (e.g., student free-lunch eligibility, race/ethnicity, and disability status). These robustness checks, in conjunction with the use of leads that allow for placebo tests and control for potential differences in district trends, provide reassurance that the estimates are credible. The appendix contains a more precise description of the statistical modeling and results.
[9] Note that there is no estimated change in Year 0 for the attendance and graduation analyses, and if students more likely to attend school and graduate were the ones who switched to charters, that should have led to lower district attendance and graduation rates.
[10] Indeed, this potential explanation is consistent with the design of the charter school law, which in later years permitted the establishment of charter schools in districts that failed to reach performance designations (which were based in large part on the performance index).
[11] Unfortunately, Dayton is one of the handful of districts for which I am missing initial years of data, which means its 2002 efforts—in response to enrollment losses in the preceding two years—do not factor into the estimates above. Additionally, the statistical analysis cannot speak to the effects in a specific district.
Last year, Ohio lawmakers enacted bold reforms that push schools to follow the science of reading, an instructional method that teaches children to read via phonics and emphasizes background knowledge and vocabulary as pathways to strong comprehension. The overall package includes a requirement that schools use high-quality curricula aligned to the science of reading, with $64 million set aside for schools needing to purchase new instructional materials.
These provisions are critical, as research indicates that effective curricula can drive achievement gains. Unfortunately, literacy experts have warned for years that too many schools use curricula embedded with ineffectual methods, most notably three-cueing, a technique that prompts children to guess at words instead of sounding them out. Reporting by Emily Hanford has cast light on two programs notorious for three-cueing: Lucy Calkins’ Units of Study and Irene Fountas and Gay Su Pinnell’s Classroom.
To better understand the curriculum landscape in the Buckeye State, lawmakers ordered the Ohio Department of Education and Workforce (DEW) to survey districts and charter schools about their reading programs. The survey was fielded last fall and received near-universal response rates. Earlier this month, the department released the results, giving us insight into which programs were used during the 2022–23 school year. Schools were not required to use high-quality curricula that year, so this is a “pre-reform” picture. The requirement to use materials from a state-approved list (which is nearing finalization) begins in 2024–25.[1]
What do we learn from this survey? The short answer is a lot—and stay tuned for more analysis in a forthcoming Fordham report. But here are the top five things to know.
Takeaway 1: Roughly half of Ohio districts will likely need to overhaul their reading curricula in the next year.
By my count, 285 school districts reported using a core reading curriculum that is currently on DEW’s approved list.[2] That leaves 320 districts needing to purchase and implement curricula that meet state requirements by next school year. These districts reported using disproven curricula such as Classroom or Units of Study, or other non-approved materials. They also include districts that reported using only supplemental materials—an issue we’ll return to later—as well as the thirty-four districts using only district-developed materials.[3] Given that the survey asked about the 2022–23 school year, it’s possible that some districts moved toward state-approved curricula in 2023–24; others may be using state-approved core curricula but neglected to report it. Even allowing for those possibilities, it still appears that roughly half of Ohio districts will need to purchase new materials. As noted above, state legislators wisely set aside funds for this purpose, and those dollars should be released to schools in the coming months.
Takeaway 2: Several state-approved core curricula are already commonly used.
The key survey result is shown in table 1, taken directly from DEW’s report. It displays the ten most common responses to a survey question asking schools about their “tier 1” (a.k.a. “core”) reading curricula for grades K–5.[4] Excluding three supplemental materials—noted by an asterisk and discussed in takeaway 5—we spot some good news: a significant number of districts and charters use core materials already approved by DEW. These include McGraw Hill’s Reading Wonders, with nearly 150 districts and charters citing one of its recent editions. Amplify’s highly regarded Core Knowledge Language Arts program also cracks the top ten, with fifty-nine districts and charters using it, as does Houghton Mifflin Harcourt’s Into Reading (forty-four). Another high-quality curriculum, Great Minds’s Wit & Wisdom, narrowly missed the top ten, with thirty-one districts and charters using it.
Table 1: Most commonly used reading curricula in Ohio, 2022–23
Takeaway 3: Dozens of districts and charters have been using ineffective curricula and will need to change course.
As for the bad stuff, we see that—again, excluding supplements—Fountas and Pinnell’s Classroom and Calkins’s Units of Study are the fourth and sixth most used core curricula in Ohio. Among districts, eighty-eight of 605 used one of these two curricula (of these, seventeen reported both). Charter schools weren’t immune either: twelve of 222 elementary charters reported using one or both programs. To its credit, DEW has not approved Classroom or Units of Study. Houghton Mifflin Harcourt’s Journeys is also a popular program but isn’t on the state-approved list. To my knowledge, Journeys has not been associated with three-cueing, but it does receive low marks in EdReports’s evaluations.
A closer look at the district-level results (available in a downloadable file) indicates that districts from all quarters of Ohio use Classroom and Units of Study. Yet suburban districts tend to use these programs more than others. This raises some dicey questions: Will affluent districts, which tend to have more political clout—plus higher test scores, given their demographics—push back on the new requirements? If so, how will policymakers respond? To head off potential grumbling, state leaders should be reaching out and reminding communities that all students—no matter their background—stand to benefit from effective reading instruction and strong, knowledge-rich curricula.
Table 2: Use of Fountas and Pinnell’s Classroom and Calkins’s Units of Study by district typology
Takeaway 4: The big-city districts are a mixed bag on reading curricula.
Students attending the Ohio Eight urban districts struggle most to achieve grade-level reading standards, so it’s absolutely critical that their schools use effective reading curricula. The DEW reading survey offers signs of both hope and concern when we look at their programs. On the positive side, table 3 shows that Cincinnati, Cleveland, Columbus, Dayton, and Toledo report using core curricula that have been approved by the state. On the other hand, Akron and Youngstown did not report a core curriculum—they reported only supplemental materials—while Canton reported using the non-approved Journeys. These districts should use this opportunity to select highly regarded programs such as Core Knowledge, EL Education, or Wit & Wisdom, as several of their urban counterparts have already done.
Table 3: Core reading curricula used by the Ohio Eight urban districts
Takeaway 5: Materials categorized as supplemental were confusingly cited as core reading curricula.
As is evident from table 1 above, Heggerty’s Phonemic Awareness and Wilson Language Training’s Fundations topped the list of reading curricula used in Ohio schools. But there’s a catch. Neither is a core curriculum—despite the table’s title—and neither is Ready Reading, which appears further down the list. Instead, all three are supplemental materials that provide extra support beyond the core reading curriculum.[5] In fact, most (though not all) districts citing Phonemic Awareness or Fundations report using another core curriculum. Thus, if one follows the main table in the DEW report, a somewhat distorted picture of curricula emerges, as hundreds of districts reported supplements as core materials.
* * *
With the survey results in hand, we now have a sense of just how heavy the science-of-reading implementation lift will be. For some, it might be a relief that only half of districts require a curriculum overhaul. It could’ve been worse! Yet it’s still a tall task to order hundreds of schools to change course. Despite the challenge, state lawmakers are right: There’s no reason for schools to continue using disproven reading curricula. The stakes for children are too high.
[1] DEW’s current list of approved reading curricula is available here. Appeals are still being processed, and a final list is expected at the end of March.
[2] In its coverage of the survey results, Cleveland.com mistakenly reported that 93 percent of districts use materials that are on DEW’s list. That percentage is the number of districts reporting use of any type of published curricula (as opposed to district-developed), regardless of whether it’s on the state-approved list.
[3] On the charter school side, sixty-eight out of 222 elementary charters reported use of a state-approved core reading curriculum.
[4] The exact wording was: “During the 2022–2023 school year, which K–5 English language arts instructional materials were primarily used by the district or school for Tier 1 instruction?”
[5] The Colorado Department of Education—a national leader in adopting high-quality materials—categorizes all three programs as supplemental materials; DEW categorizes Fundations as a supplement, while the other two programs do not currently appear on its state-approved lists.