Recently, Ohio policymakers have been mulling changes to the state’s attendance tracking framework. It wouldn’t be the first time they’ve done so. In 2016, they overhauled student attendance and absenteeism policies via House Bill 410. Among its many provisions, this legislation transitioned the state’s definition of chronic absenteeism from days to hours. Rather than counting how many days of school students missed in a year, the state now requires districts to track the number of hours.
This was a critical change, as it made Ohio’s student attendance data much more accurate. Previously, when schools tracked attendance by days, students could miss significant chunks of instructional time—say, for a morning doctor’s appointment or a family emergency in the afternoon—and still be marked present for the entire day. In the case of infrequent medical appointments or family emergencies, such imprecise tracking isn’t a big deal. But in other instances, it is a big deal. For elementary students who miss the first hour of class twice a week because they’re late to school, or high schoolers who regularly skip their final class of the day because they’re just not feeling it, that time adds up. Tracking attendance by hours makes it possible for educators and parents to recognize the cumulative impact of seemingly small absences and then work to address them.
HB 410 didn’t change the definition of chronic absenteeism just for the sake of data transparency, though. The change aligned attendance policies with the state’s instructional requirements, which were also transitioning from days to hours. Previously, Ohio districts were required to be open for a certain number of days during each school year. To accommodate emergencies like snowstorms or water main breaks, administrators were provided with five “calamity days,” during which they could cancel classes without being required to offer students makeup instructional time. By shifting from days to hours, districts no longer needed calamity days. Instead, they could schedule “excess” hours above the minimum number of hours required by law, and hours missed above the minimum did not have to be made up.
Although this was a well-intentioned reform, it had an unexpected downside. “Excess hours” permitted districts to cancel class or alter their school schedules for questionable reasons. The latest and most ridiculous example is “eclipse fever,” which my colleague Jeff Murray recently discussed. He notes that the total solar eclipse that will occur on April 8 is a “stunning astronomical phenomenon” that offers schools a once-in-a-lifetime opportunity to provide students with firsthand science education. Rather than take advantage of this opportunity, however, district leaders across the state have decided to “close their entire districts for the whole day and provide zero educational opportunities whatsoever.” Their reasons range from potential traffic backups and Wi-Fi outages to safety concerns. Upon closer inspection, many of these reasons ring hollow, particularly because kids won’t only be missing out on a rare learning opportunity in science. They’ll be missing out on reading, math, and history, too.
Far more damaging is the spread of four-day school weeks. Over the last few years, this idea has gained traction nationwide. In Missouri, for example, 144 districts operated on a four-day schedule in 2023, adding up to more than 27 percent of the state’s total number of districts. Because the Buckeye State tracks instruction by hours instead of days, it would be fairly easy for Ohio districts to make this jump as well. All they need to do is tack on a few instructional hours to the first four days of the week, and they can skip the fifth. In the last year, at least one Ohio district and one charter school have instituted four-day weeks. The transition drew national attention, and plenty of other Ohio districts are eagerly watching to see if they, too, should make the switch.
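To see how simple the arithmetic is, consider a rough illustration with hypothetical round numbers (Ohio’s actual statutory minimums aren’t cited here): a district offering six hours of instruction a day across a traditional 180-day, five-day-week calendar provides 1,080 hours. Spread over four-day weeks, those same 36 weeks yield only 144 school days, so each day needs to stretch to just 7.5 hours to hit the identical total:

$$180 \text{ days} \times 6 \tfrac{\text{hours}}{\text{day}} = 1{,}080 \text{ hours} \qquad\qquad \frac{1{,}080 \text{ hours}}{36 \text{ weeks} \times 4 \tfrac{\text{days}}{\text{week}}} = 7.5 \tfrac{\text{hours}}{\text{day}}$$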
Administrators typically cite cost savings, teacher recruitment and retention, and improving school climate and attendance as reasons for this shift. The problem, however, is that the research on shorter weeks doesn’t live up to the hype. Cost savings are only about two percent, on average. A 2021 RAND report found that, although teachers viewed a four-day week as a “perk,” most said it was not a factor in deciding to work for their district, and the impact on retention depended on local context. Evidence is mixed on whether shorter weeks improve school climate and student behavior. Studies have not found any effect on attendance rates. But many studies do find negative impacts on achievement that are “roughly equivalent to a student being two to seven weeks behind where they would have been if they had stayed on a five-day week.”
What does all this mean for lawmakers who are considering if and how to revamp Ohio’s attendance tracking framework? Two things.
First, when it comes to tracking attendance for individual students, Ohio needs to keep the focus on hours. Given the lingering effects of the pandemic on student learning, as well as the notable negative impacts of chronic absenteeism, kids can’t afford to miss school. When they do miss class, educators and families need to be able to determine exactly how much time was missed, so they can track the cumulative impact and intervene when necessary. The best way to ensure that teachers and parents have this information is to maintain accurate hours-based determinations for chronic absenteeism, excessive absences, and habitual truancy. Otherwise, too many absences—and too many kids—can fall through the cracks.
Second, lawmakers should consider reverting to requiring a minimum number of days that districts and schools must be open, while also identifying a minimum number of hours that must make up each day. Administrators would still be able to cancel classes when emergencies arise. But having a set number of calamity days, rather than an open-ended number of “excess hours,” would ensure that classes are only cancelled for true emergencies. Most importantly, districts would have to put a pause on shifting to four-day weeks. If, over the next few years, states like Missouri can show that four-day school weeks have significant positive impacts on student achievement, teacher recruitment and retention efforts, school climate, and district bottom lines, then Ohio lawmakers can reevaluate. But right now, the research isn’t promising on any of those fronts.
Tracking student attendance by hours, while tracking districts by hours and days, might give some folks pause. But the outsized impact of attendance on student outcomes makes this extra measure necessary. The state tried aligning the framework under days, and it didn’t work. Too many kids were missing too much class without anyone noticing. More recently, the state tried aligning under hours. But that didn’t work either. Districts opted to cancel classes for questionable reasons, and some have started to move toward scheduling changes that don’t appear to be in the best interest of kids. Now, all that’s left is to track students and districts by the measures that work best for each—hours for students, and hours plus days for districts.
Last year, Ohio lawmakers enacted bold reforms that push schools to follow the science of reading, an instructional method that teaches children to read via phonics and emphasizes background knowledge and vocabulary as pathways to strong comprehension. The overall package includes a requirement that schools use high-quality curricula aligned to the science of reading, with $64 million set aside for schools needing to purchase new instructional materials.
These provisions are critical, as research indicates that effective curricula can drive achievement gains. Unfortunately, literacy experts have warned for years that too many schools use curricula embedded with ineffectual methods, most notably three-cueing, a technique that prompts children to guess at words instead of sounding them out. Reporting by Emily Hanford has cast light on two programs notorious for three-cueing: Lucy Calkins’ Units of Study and Irene Fountas and Gay Su Pinnell’s Classroom.
To better understand the curriculum landscape in the Buckeye State, lawmakers ordered the Ohio Department of Education and Workforce (DEW) to survey districts and charter schools about their reading programs. A survey was fielded last fall and received near-universal response rates. Earlier this month, the department released the results, giving us insight into which programs were used during the 2022–23 school year. Schools were not required to use high-quality curricula that year, so this is a “pre-reform” picture. The requirement to use materials from a state-approved list (which is nearing finalization) begins in 2024–25.[1]
What do we learn from this survey? The short answer is a lot—and stay tuned for more analysis in a forthcoming Fordham report. But here are the top five things to know.
Takeaway 1: Roughly half of Ohio districts will likely need to overhaul their reading curricula in the next year.
By my count, 285 school districts reported using a core reading curriculum that is currently on DEW’s approved list.[2] This leaves 320 districts needing to purchase and implement curricula that meet state requirements by next school year. These districts reported use of disproven curricula like Classroom or Units of Study, or other non-approved materials. They also include districts that reported using only supplemental materials—an issue we’ll return to later—as well as the thirty-four districts using only district-developed materials.[3] Given that the survey was based on what schools used in the 2022–23 school year, it’s possible that some districts moved towards state-approved curricula in 2023–24; others may be using state-approved core curricula but neglected to report it. Even with those possibilities, it still appears that roughly half of Ohio districts will need to purchase new materials. As noted above, state legislators wisely set aside funds to do this, and those dollars should be released to schools in the coming months.
Takeaway 2: Several state-approved core curricula are already commonly used.
The key survey result is shown in table 1, which comes directly from DEW’s report. It displays the ten most common responses to a survey question that asks schools about their “tier 1” (a.k.a. “core”) reading curricula for grades K–5.[4] When excluding three supplemental materials—noted by an asterisk and discussed in takeaway 5—we spot some good news, as a significant number of districts and charters use core materials already approved by DEW. These include McGraw Hill’s Reading Wonders, with nearly 150 districts and charters citing use of one of its recent editions. Amplify’s highly regarded Core Knowledge Language Arts program also cracks the top ten, with fifty-nine districts and charters using it, as well as Houghton Mifflin Harcourt’s Into Reading (forty-four). Another high-quality curriculum, Great Minds’s Wit & Wisdom, narrowly missed the top ten, with thirty-one districts and charters using it.
Table 1: Most commonly used reading curricula in Ohio, 2022–23
Takeaway 3: Dozens of districts and charters have been using ineffective curricula and will need to change course.
As for the bad stuff, we see that—again, excluding supplements—Fountas and Pinnell’s Classroom and Calkins’s Units of Study are the fourth and sixth most used core curricula in Ohio. Among districts, eighty-eight of 605 used one of these two curricula (of these, seventeen reported both). Charter schools weren’t immune either, as twelve out of 222 elementary charters reported using one or both programs. To its credit, DEW has not approved Classroom or Units of Study. Houghton Mifflin Harcourt’s Journeys is also a popular program, but it isn’t on the state-approved list. To my knowledge, Journeys has not been associated with three-cueing but does receive low marks on EdReports’s evaluations.
A closer look at the district-level results (available in a downloadable file) indicates that districts from all quarters of Ohio use Classroom and Units of Study. Yet suburban districts tend to use these programs more than others. This raises some dicey questions: Will affluent districts, which tend to have more political clout—plus higher test scores, given their demographics—push back on the new requirements? If so, how will policymakers respond? To head off potential grumbling, state leaders should be reaching out and reminding communities that all students—no matter their background—stand to benefit from effective reading instruction and strong, knowledge-rich curricula.
Table 2: Use of Fountas and Pinnell’s Classroom and Calkins’s Units of Study by district typology
Takeaway 4: The big-city districts are a mixed bag on reading curricula.
Students attending the Ohio Eight urban districts struggle most to achieve grade-level reading standards. It’s absolutely critical that their schools use effective reading curricula. The DEW reading survey finds signs of both hope and concern when we look at their programs. On the positive side, table 3 shows that Cincinnati, Cleveland, Columbus, Dayton, and Toledo report using core curricula that have been approved by the state. On the other hand, Akron and Youngstown did not report a core curriculum—they only reported supplemental materials—while Canton reported use of the non-approved Journeys. These districts should use this opportunity to select highly regarded programs such as Core Knowledge, EL Education, or Wit & Wisdom, as several of their urban counterparts have already done.
Table 3: Core reading curricula used by the Ohio Eight urban districts
Takeaway 5: Materials categorized as supplemental were confusingly cited as core reading curricula.
As evident from table 1 above, Heggerty’s Phonemic Awareness and Wilson Language Training’s Fundations topped the list of reading curricula used in Ohio schools. But there’s a catch. Neither is a core curriculum—despite the table’s title—and neither is Ready Reading, which appears further down the list. Instead, all three are supplemental materials that provide extra support beyond the core reading curriculum.[5] In fact, most (though not all) districts citing use of Phonemic Awareness and Fundations report using another core curriculum. Thus, if one follows the main table in the DEW report, a somewhat distorted picture of curricula emerges, as hundreds of districts reported supplements as core materials.
* * *
With survey results in hand, we now have a sense of just how heavy the science of reading implementation lift will be. For some, it might be a relief that only half of districts require a curriculum overhaul. It could’ve been worse! Yet it’s still a tall task to order hundreds of schools to change course. Despite the challenge, state lawmakers are right: There’s no reason for schools to continue using disproven reading curricula. The stakes for children are too high.
[1] DEW’s current list of approved reading curricula is available here. Appeals are still being processed, and a final list is expected at the end of March.
[2] In its coverage of the survey results, Cleveland.com mistakenly reported that 93 percent of districts use materials that are on DEW’s list. That percentage is the number of districts reporting use of any type of published curricula (as opposed to district-developed), regardless of whether it’s on the state-approved list.
[3] On the charter school side, sixty-eight out of 222 elementary charters reported use of a state-approved core reading curricula.
[4] The exact wording was: “During the 2022–2023 school year, which K–5 English language arts instructional materials were primarily used by the district or school for Tier 1 instruction?”
[5] The Colorado Department of Education—a national leader in adopting high-quality materials—categorizes all three programs as supplemental materials; DEW categorizes Fundations as a supplement, while the other two programs do not currently appear on its state-approved lists.
Last spring, state officials published data indicating some worrying signs regarding the future of Ohio’s teacher workforce. While Ohio lawmakers took advantage of the biennial state budget to enact several policies aimed at addressing these issues, there’s still plenty of work to be done.
Over the next several weeks, we’ll examine a plethora of potential ideas—many of which we’ve previously proposed in policy briefs and other analyses—as to how Ohio lawmakers can finish the job and effectively bolster the teacher workforce. First, we’ll start with the importance of gathering accurate data on teacher shortages.
Teacher shortages have been making headlines for years based on anecdotal reports from district and school administrators. According to research by the Center for American Progress, enrollment in teacher preparation programs nationally fell by more than one-third from 2010 to 2018, with Ohio posting a decline of nearly 50 percent. Ohio was one of nine states where the drop totaled more than 10,000 prospective teachers. State data released last year indicates that the number of newly licensed teachers has gradually declined since 2014.
To make matters even more complicated, teacher shortages vary from place to place and between grades and subject areas. Using teacher vacancy data from Tennessee, a paper published by Brown University’s Annenberg Institute for School Reform found that staffing issues are “highly localized.” That means it’s possible for teacher shortages and surpluses to exist simultaneously, and explains why some districts and schools are struggling to staff classrooms while others aren’t.
For Ohio, pinpointing the depth and breadth of shortages and where, exactly, they exist is crucial. State leaders can’t craft effective solutions unless they understand the full size and scope of the problem. That means understanding both teacher supply and demand. It’s not enough to know that the number of newly licensed teachers has declined or that attrition rates have gone up. We must also understand the specific teaching needs that schools have.
Unfortunately, Ohio doesn’t collect data on teacher demand. We don’t know the number of teaching positions that go unfilled each year, which leaves state leaders guessing as to how big of a hole they need to fill. There’s also no way of knowing how long it takes districts to fill open positions, what the candidate pool looks like, or whether schools have opted to just stop offering certain classes (like French or a career-technical education course) because they couldn’t find a teacher.
These are all details that must be taken into account when shaping policy because they require varying solutions. For example, if districts report high demand for math teachers but weak demand for history teachers, then it would be wise to beef up math teacher recruitment but unwise to commit resources to recruiting history teachers. Similarly, if the vast majority of districts report high demand for special education teachers, blanket teacher recruitment efforts likely aren’t going to cut it.
To put it another way, although Ohio leaders deserve kudos for trying to address teacher shortages, without data on teacher demand their efforts are too reliant on guesswork. Even if state leaders blindly manage to hit a few targets, we won’t actually know they’ve been hit. Without accurate and annually updated data on demand, we have no way of knowing whether the policies and initiatives Ohio implements have actually succeeded in bolstering the teacher workforce.
With all this in mind, we offer two recommendations.
First, lawmakers should add a provision to state law directing the Ohio Department of Education and Workforce (DEW) to collect data about teacher demand and vacancies. Specifically, they could collect information from districts regarding the number of vacancies by school, grade level, and subject area; how long it takes to fill vacancies; the number of applicants for each vacancy; the number of vacancies filled by long-term substitutes; and any courses or subjects that were eliminated due to hiring difficulties or extended vacancies. Requiring the department to collect these data will make it easier to understand teacher shortages and will help ensure that policy solutions will be timely and effective.
Second, DEW should make teacher data more easily accessible to the public. One way to do so would be to create a distinct state dashboard that tracks both teacher supply and demand. ExcelinEd published a model policy for such a dashboard last year that Ohio lawmakers could incorporate into state law. Another option is to take a page out of North Carolina’s book and compile and publish an annual report about the state of the teaching profession. Both options would give state leaders the ability to track trends over time and pinpoint potential problem areas to head off future teacher shortages.
Obviously, data collection itself won’t solve labor shortages. But detailed information about teacher supply and demand will empower lawmakers to craft better, more effective policies. To help themselves down the road—and to ensure that Ohio schools have what they need—lawmakers should move quickly to gather quality information with which to make teacher policy changes.
How valuable is a bachelor’s degree? Less so than it used to be, says a new report, but the ultimate value depends on a number of factors, including tuition cost and college major.
A trio of researchers led by Liang Zhang of New York University focused on the internal rate of return (IRR) for students who graduated with a bachelor’s degree between 2009 and 2021, using data from the American Community Survey (ACS). The year 2009 was the first in which the ACS collected information on the majors in which students completed degrees. They limited the sample to individuals who were born in the United States and were eighteen to sixty-five years old, held either a high school diploma or a bachelor’s degree as their highest level of education, were not currently enrolled in school, and had positive earnings. Applying these criteria yielded a final sample of 5.8 million individuals, evenly split between 2.9 million college graduates and 2.9 million high school graduates as the comparison group. The IRR calculation considers both the lifetime costs (e.g., tuition and forgone earnings) and benefits (e.g., higher earnings) of college to graduates by discounting future costs and benefits to their present value. One issue the researchers touch on is a potential mismatch between the ability levels of the two groups of students (A+ high schoolers vs. C- college grads). Without this specific data, they use “estimates from the existing literature” to adjust for the possible selection bias. Inexact, but at least on their minds.
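For readers unfamiliar with the metric, a stylized version of the calculation (not the authors’ exact specification) works like this: the IRR is the discount rate $r$ that sets the net present value of the degree to zero, where $C_t$ includes tuition and forgone earnings during the college years and $B_t$ is the subsequent earnings premium over a comparable high school graduate:

$$\sum_{t=0}^{T} \frac{B_t - C_t}{(1+r)^t} = 0$$

A larger $r$ means the investment pays off faster; the report’s headline finding is that this rate now sits between 9 and 10 percent for the typical graduate.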
First and foremost, they find that college degree completion still provides a solid return on investment compared to students with just high school degrees. Both median and mean earnings show an IRR between 9 and 10 percent. Male college graduates get a lower return than their female counterparts, but the difference is only around three-quarters of a percentage point. The analysts do note that similar research in the late 1980s showed a larger IRR, which likely reflects both the increase in college costs in the intervening years and the flattening of wage growth generally following the Great Recession.
Additionally, IRR varies significantly depending on the college major a student pursues. Engineering and computer science majors are at the top (more than a 13 percent IRR)—with business, health, and math and science close behind. At the lower end are education, humanities, and the arts (below an 8 percent IRR). The researchers note a strong increase in degree completion among those higher-return majors over the timespan of their analysis, despite the overall reduction in college enrollment since 2010, which helps buoy the overall IRR findings.
The limitations noted by the researchers are small but important—the analysis does not calculate differential impacts based on the selectivity of colleges attended, nor does it account for tax-related policies that can decrease earnings or reduce the final cost of college attendance. More important is the fact that the labor market of tomorrow may not follow the historical trends on display here. Ongoing technological advancements in robotics and artificial intelligence, as well as the increase in career-technical education opportunities in the middle and high school years, have the potential to upend all employment sectors in unpredictable ways.
As clear as these data are about the declining but still quite positive return on a college degree even as recently as 2021, the future for today’s degree earners is nowhere near as crystalline as that hindsight.
For many students and teachers, the pivot from in-person to remote learning in March 2020 was a sudden lurch from the known to the unknown. Writ large, research shows the academic impact of that move was devastating. But details matter—and so do exceptions. Little attention has been paid to students who were already familiar with working in remote modes prior to Covid-mitigation school closures. Is it possible that they fared better than their peers, experiencing what amounted to a simple extension of their status quo? New research out of Germany tries to answer that question.
The researchers followed the longitudinal performance of 2,700 students who studied mathematics using an intelligent tutoring system (ITS) called Bettermarks. It is a robust, adaptive online system that can be used in class or at home and covers typical math content taught in grades four through twelve, from basic addition and subtraction to probability and statistics. The “teaching” portion of Bettermarks is mostly overview rather than direct instruction and is primarily used to give students practice problem sets to complete. That said, the system is also interactive—programmed to give feedback when common errors are made (“don’t add the numerators and the denominators,” “find the lowest common denominator,” etc.), provide a finite number of brief hints when requested, and supply students with more practice problems of varying difficulty in areas where they display specific weaknesses. This ITS can be programmed to provide unlimited problem sets upon student demand or to limit available sets to those assigned by a teacher. For this study, students were only working on assigned sets and had to have completed a minimum of five sets in each of the time periods under review.
The analysis covered ITS usage between January 2017 and May 2021 and, significantly, only included students who used the system before, during, and after pandemic-related school closures in Germany. The goal was to get the fullest picture of usage and outcome changes from open schools to closed schools and back again. The analysts particularly focused on potential differential effects for lower- versus higher-performing students, as previous research showed inconclusive and potentially conflicting results for those students. Based on pre-closure performance on ITS problem sets between January 1, 2017, and March 15, 2020, analysts assigned students to either a low- or high-performing group for comparison.
During the first round of school closures in Germany (March 16 through May 31, 2020), all students using the ITS showed an increase in relative accuracy rates compared to their pre-closure performance. Higher-performing students exhibited the highest absolute accuracy in terms of raw scores, but initially-lower-performing students showed larger growth in accuracy relative to the full sample, with their absolute accuracy rising more quickly toward the end of the period.
German schools reopened for in-person learning in the fall of 2020 but closed again on January 1, 2021, for two full months. Looking at ITS performance during this initial return-to-business-as-usual period, the researchers saw a similar pattern to the first closure period, with an overall increase in relative accuracy rates driven by an acceleration of absolute accuracy by initially-lower-performing students. Results during the second closure period (through February 28, 2021) showed a similar pattern. Finally, in-person learning returned for good as of March 1, 2021, and the researchers looked at ITS performance from this point through the end of the school year on May 31, 2021. Perhaps unsurprisingly at this point: The results were the same again.
What to make of this steady pattern of improvement? First and foremost is the evidence that these students were not negatively impacted by the switch from in-person to remote schooling, even through the repeated open-and-close cycles they experienced. In fact, the trajectory for all students in this study was entirely upward, with low-achievers gaining the most. Whatever was going on within their daily class instruction, these findings indicate that students remained engaged and productive. This is not ironclad proof that students became more adept at learning from online resources due to increased use during school closures, although that could certainly be part of the explanation for the results observed. Additional suggestions from the report’s authors include a decrease of “math anxiety” (especially in lower-performing students) due to the lack of face-to-face competition or judgment from peers, and the possibility that use of—and success in—the ITS was being “incentivized” by teachers who made scores part of students’ class grade.
In the end, these findings are part of the complex puzzle of what happened to students as a result of Covid-mitigation school closures. It’s important to know these things, even if we’re just trying to put the pieces together years later—especially the more positive parts of the picture, as small and localized as they may have been.
Does expanding educational options harm traditional school districts? This question—a central one in the school choice debate—has been studied numerous times in various locales. Time and again, researchers have returned with a “no.” Choice programs do no harm to school districts, and in many instances even lead to improvements through what economists call “competitive effects.” Brian Gill of Mathematica Policy Research, for instance, reports that ten out of eleven rigorous studies on public charter schools’ effects on district performance find neutral to positive outcomes. Dozens of studies on private schools’ impacts on districts (including ones from Ohio) find similar results.
This research brief by the Fordham Institute’s Senior Research Fellow, Stéphane Lavertu, adds to the evidence showing that expanding choice options doesn’t hurt school districts. Here, Dr. Lavertu studies the rapid expansion of Ohio’s public charter schools in some (largely urban) districts during the early 2000s. He discovers that the escalating competition in these locales nudged districts’ graduation and attendance rates slightly upward, while having no discernible impacts on their state exam results.
Considered in conjunction with research showing that Ohio’s brick-and-mortar charters outperform nearby districts, we can now safely conclude that charters strengthen the state’s overall educational system. Charters directly benefit tens of thousands of students, provide additional school options to parents, and serve as laboratories for innovation—all at no expense to students who remain in the traditional public school system.
It’s time that we finally put to rest the tired canard that school choice hurts traditional public schools. Instead, let us get on with the work of expanding quality educational options, so that every Ohio family has the opportunity to select a school that meets their children’s individual needs.
Introduction
Compelling evidence continues to show that the emergence of charter schools has had a positive impact on public schooling. Recently, professors Feng Chen and Douglas Harris published a study in a prestigious economics journal that found that students attending public schools—both traditional and charter public schools—experienced improvements in their test scores and graduation rates as charter school attendance increased in their districts.[1] Based on further analysis, the authors conclude that the primary driver was academic improvement among students who attended charter schools (what we call “participatory effects”) though there were some benefits for students who remained in traditional public schools (due to charter schools’ “competitive effects”).
These nationwide results are consistent with what we know from state- and city-specific studies: Charter schools, on average, lead to improved academic outcomes among students who attend them and minimally benefit (but generally do not harm) students who remain in traditional public schools. The estimated participatory effects are also consistent with what we know about Ohio’s brick-and-mortar charter schools, which, on average, have increased the test scores and attendance rates of students who attend them.
Chen and Harris’s study provides some state-specific estimates of charter schools’ total impact (combined participatory and competitive effects) in supplementary materials available online, but those appendices report statistically insignificant estimates of the total effects of Ohio charter schools. How could there be no significant total effect, given what we know about the benefits of attending charter schools in Ohio? One possibility is that their data and methods have limitations that might preclude detecting effects in specific states. Another possibility, however, is that their null findings for Ohio are accurate and that charter schools’ impacts on district students are sufficiently negative that they offset the academic benefits for charter students.[2]
To set the record straight, we need to determine Ohio charter schools’ competitive effects—that is, their impact on students who remain in district schools. The novel analysis below—which addresses several limitations of Chen and Harris’s analysis[3]—indicates that although the initial emergence of charter schools had no clear competitive effects in terms of districtwide student achievement, there appear to have been positive impacts on Ohio districts’ graduation and attendance rates. Combined with what we know about Ohio charter schools’ positive participatory effects, the results of this analysis imply that the total impact of Ohio’s charter schools on Ohio public school students (those in both district and charter schools) has been positive.
There are limitations to this analysis. For methodological reasons, it focuses on the initial, rapid expansion of charter schools between 1998 and 2007. And although it employs a relatively rigorous design, how conclusively the estimated effects may be characterized as “causal” is debatable. But the research design is solid and, considered alongside strong evidence of the positive contemporary impacts of attending Ohio’s brick-and-mortar charter schools, it suggests that Ohio’s charter sector has had an overall positive impact on public schooling. Thus, the evidence indicates Ohio’s charter-school experience indeed tracks closely with the positive national picture painted by Chen and Harris’s reputable study.
Estimating the impact of charter school market share on Ohio school districts
A first step in estimating competitive effects is obtaining a good measure of charter school market share that captures significant differences between districts in terms of charter school growth.[4] Figure 1 illustrates the initially steep increase in the share of public school students in the average “Ohio 8” urban district[5] (from no charter school enrollment during the 1997–98 school year to nearly 14 percent enrollment during the 2006–07 school year) as well as the much more modest increase in the average Ohio district (nearly 2 percent of enrollment by 2006–07).[6] The rapid initial increase in some districts (like the Ohio 8) but not others provides a pronounced “treatment” of charter school competition that may be sufficiently strong to detect academic effects using district-level data.
Figure 1. Charter market share in Ohio districts
Focusing on the initial introduction of charter schools between 1998 and 2007 provides significant advantages. To detect competitive effects, one must observe a sufficient number of years of district outcomes so that those effects have time to set in. It can take time for districts to respond to market pressure, and there may be delays in observing changes in longer-term outcomes, such as graduation rates. On the other hand, it is important to isolate the impact of charter schools from those of other interventions (notably, No Child Left Behind, which led to sanctions that affected districts after the 2003–04 school year) and other events that affected schooling (notably, the EdChoice scholarship program and the Great Recession after 2007). Because these other factors may have disproportionately affected districts that were more likely to experience charter school growth, it is easy to misattribute their impact to charter competition. To address these concerns, the analysis focuses primarily on estimating the impact of the initial growth in charter enrollments on district outcomes three and four years later (e.g., the impact of increasing market share between 1998 and 2003 on outcomes from 2001 to 2007).
After creating a measure that captures significant differences between districts in initial charter school growth, the next step is to find academic outcome data over this timespan. Ohio’s primary measure of student achievement for the last two decades has been the performance index, which aggregates student achievement levels across various tests, subjects, and grades. It is a noisy measure, but it goes back to the 2000–01 school year and, thus, enables me to leverage the substantial 1998–2007 increase in charter school market share. In addition to performance index scores, I use graduation and attendance rates that appeared on Ohio report cards from 2002 to 2008 (which reflect graduation and attendance rates from 2000–01 to 2006–07).[7]
Using these measures, I estimate statistical models that predict the graduation, attendance, and achievement of district students in a given year (from 2000–01 to 2006–07) based on historical changes in the charter school market share in that same district (from 1997–98 to 2006–07), and I compare these changes between districts that experienced different levels of charter school growth. Roughly, the analysis compares districts that were on similar academic trajectories from 2001 to 2007 but that experienced different levels of charter entry in prior years. A major benefit of this approach is that it essentially controls for baseline differences in achievement, attendance, and graduation rates between districts, as well as statewide trends in these outcomes over time. And, again, because impact estimates are linked to charter enrollments three and four years (or more) prior to the year in which we observe the academic outcomes, the results are driven by charter-school growth between 1998 and 2003—prior to the implementation of No Child Left Behind and EdChoice, and prior to the onset of the Great Recession.
Finally, I conducted statistical tests to assess whether the models are in fact comparing districts that were on similar trajectories but that experienced different levels of charter entry. First, I conducted “placebo tests” by estimating the relationship between future charter market shares and current achievement, attendance, and graduation levels in a district. Basically, if future market shares predict current academic outcomes, then the statistical models are not comparing districts that were on similar academic trajectories and thus cannot provide valid estimates of charter market share’s causal impact. I also tested the robustness of the findings to alternative graduation rate measures and the inclusion of various controls that capture potential confounders, such as changes in the demographic composition of students who remained in districts. The results remain qualitatively similar, providing additional support for the causal interpretation of the estimated competitive effects.[8]
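For readers who want to see the machinery, here is a stylized rendering of the kind of two-way fixed-effects specification described in the appendix (endnote 8); the notation is mine, and the exact variable definitions and lag structure are the author’s:

$$Y_{dt} = \alpha_d + \gamma_t + \sum_{k=-K}^{K} \beta_k \,\text{Share}_{d,\,t-k} + X_{dt}'\theta + \varepsilon_{dt}$$

Here $Y_{dt}$ is a district’s graduation rate, attendance rate, or performance index score in year $t$; $\alpha_d$ and $\gamma_t$ are district and year fixed effects; $\text{Share}_{d,\,t-k}$ captures charter market share $k$ years before (lags) or after (leads) the outcome year, with the lead coefficients serving as the placebo tests described above; and $X_{dt}$ holds optional time-varying demographic controls.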
Findings
Finding No. 1: A 1-percentage-point increase in charter school market share led to an increase in district graduation rates of 0.8 percentage points four years later. That implies that districts with a 10 percent charter market share had graduation rates 8 percentage points higher than they would have had in the absence of charter school competition.
I begin with the analysis of graduation rates. Figure 2 (below) plots the estimated impact of increasing charter market share by one percentage point on district-level graduation rates. Roughly, the thick blue line captures differences in graduation rates between districts that experienced a one percentage point increase in charter market share and those that did not experience an increase. Year 0 is the year of the market share increase, and the blue line to the right of 0 captures the estimated impact of an increased market share one, two, three, four, and five (or more) years later. The dotted lines are 95 percent confidence intervals, indicating that, if the analysis were repeated many times, intervals constructed this way would contain the true effect 95 percent of the time.
The results indicate that an increased charter market share had no impact on district graduation rates in the first couple of years. However, an increase in charter market share of 1 percentage point led to district graduation rates that, four years later, were 0.8 of a percentage point higher than they would have been in the absence of charter competition. Thus, if the average district had a charter market share of 10 percent in 2003, the results imply that it would have realized graduation rates that were 8 percentage points higher in 2007 (i.e., 0.8 × 10) than they otherwise would have been. For a typical Ohio 8 district that experienced a 14 percent increase in charter market share, that was the equivalent of going from a graduation rate of 57 percent to a graduation rate of 68 percent.
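To make the back-of-the-envelope arithmetic explicit, using only the figures cited above:

$$0.8 \times 10 = 8 \text{ percentage points} \qquad\qquad 0.8 \times 14 \approx 11 \;\Rightarrow\; 57\% + 11 \approx 68\%$$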
Figure 2. Impact of charter market share on districts’ graduation rates (2001–2007)
Importantly, as the estimates to the left of the y axis reveal, there are no statistically significant differences in graduation rates between districts that would go on to experience a 1-percentage-point increase in market share (in year 0) and those that would not go on to experience that increase. This is true one, two, three, four, and five (or more) years prior. Controlling for changes in districts’ student composition (e.g., free-lunch eligibility, race/ethnicity, disability status, and achievement levels) does not affect the results. Finally, although the estimates in Figure 2 are statistically imprecise (the confidence intervals are large), the Year 4 estimate is very close in magnitude to the statistically significant estimate (p<0.001) based on a more parsimonious specification that pools across years (see appendix Table B1). These results suggest that competition indeed had a positive impact on district students’ probability of graduation.
One potential limitation of this study is that the market share measure includes students enrolled in charter schools that are dedicated to dropout prevention and recovery. If students who were likely to drop out left district schools to attend these charter schools, then there would be a mechanical relationship between charter market share and district graduation rates. This dynamic should have a minimal impact on these graduation results, however. First, in order to explain the estimated effects that show up three and four years after charter market shares increase, districts would have needed to send students to dropout-recovery schools while they were in eighth or ninth grade (they couldn’t be in grades ten to twelve, as the dropout effects show up in Year 4); and these students needed to be ones who would go on to drop out in eleventh or twelfth grade (as opposed to grade nine or ten). That is a narrow set of potential students. Second, for this dynamic to explain the results (where a one-percentage-point increase in charter market share leads to a 0.8-percentage-point decrease in dropouts), a large majority of the market share increase that districts experienced would need to be due to these students who would eventually drop out. Given the small proportion of charter students in dropout-recovery schools and the even smaller proportion of those who meet the required profile I just described, it seems that shipping students to charters focused on dropout prevention and recovery can be only a small part of the explanation.
Finding No. 2: A 1-percentage-point increase in charter school market share led to an increase in district attendance rates of 0.08 percentage points three years later. That implies that districts with a 10 percent charter market share had attendance rates 0.8 of a percentage point higher than they would have had in the absence of charter school competition.
The results for district attendance rates are also imprecise, with unstable point estimates and large confidence intervals in Years 4 and 5 (or later). But Figure 3 indicates a statistically significant effect in Year 3 of 0.08 percentage points, and this Year-3 estimate is very close in magnitude to the statistically significant estimate (p<0.01) based on a more parsimonious specification that pools across years (see appendix Table B1). For the typical Ohio 8 district, the estimated effect is the equivalent of their attendance rate going from 90.5 percent to 91.6 percent.
Figure 3. Impact of charter market share on districts’ attendance rates (2001–2007)
Thus, as was the case with graduation rates, these by-year estimates are imprecise, but they confirm more precise estimates from models that pool across years, provide evidence that there is a plausible time lag between increases in market share and increases in attendance rates, and provide some confidence that the results are not attributable to pre-existing differences between districts that experienced greater (as opposed to lesser) increases in charter competition. That the timing of attendance effects roughly corresponds to increases in graduation rates provides further support that the results don’t merely capture statistical noise.
Finding No. 3: An increase in charter school market share did not lead to a statistically significant change in districts’ scores on the performance index.
The results for districtwide student achievement indicate no statistically significant effects (see Figure 4, below). Unfortunately, we lack the statistical power to rule out effects that one might deem worthy of attention. Additionally, the immediate (statistically insignificant) decline in the performance index in the year of the market share increase (Year 0) might be attributable to relatively high-achieving students leaving for charter schools and thus might not capture changes in student learning. If high-achieving students were more likely to go to charter schools, then districts’ performance index scores should decline in exactly the year that charter market shares increased.[9]
Figure 4. Impact of charter market share on districts’ scores on the performance index (2001–2007)
The results of a simple model that pools across years indicate a negative relationship between charter market share and district performance index scores (see Table B1 in the appendix). The results in Figure 4, however, call this negative correlation into question. Controlling for future market share (as does the model used to generate Figure 4) renders the estimates from Year 1 to Year 4 statistically insignificant. That the coefficient for five years (or more) prior is -0.04 and nearly statistically significant suggests that the relationship in Table B1 between market share and the performance index may be attributable to the fact that districts experiencing declines in achievement were more likely to subsequently experience charter school growth, as opposed to the other way around.[10] The estimate from the simple performance-index model that pools across years is also the only one that is not robust to limiting the analysis to pre-NCLB years (see Table B1 in the appendix).
Despite the somewhat imprecise (and perhaps invalid) statistical estimates of the impact of charter market share on districts’ performance index scores, what one can say is that the analysis rules out large declines in the achievement levels of district students. Additionally, these results are similar to those of a 2009 RAND study that found no statistically significant differences in student-level test score growth among students who attended a traditional public school that had a charter school in close proximity, as compared to students whose traditional public schools were farther from the nearest charter school. That study did not leverage the initial growth in the charter school sector, but it provides a different type of evidence and relatively precise estimates.
Thus, in spite of the potential limitations related to changes in student composition and imprecise (and perhaps invalid) statistical estimates, the results of this analysis provide one more piece of evidence that charter school competition did not have negative effects on student learning in district schools.
What can we learn from what happened from 1998 to 2007?
The introduction of charter schools in Ohio significantly disrupted school district operations. For example, in 2002, EdWeek documented Dayton Public Schools’ newfound dedication to academic improvement in response to its rapidly expanding charter sector. As Chester E. Finn, Jr. discussed in a post that same year, the district considered a number of reforms—notably the closure of under-enrolled and under-performing schools, which Chen and Harris’s recent study identified as the most likely mechanism explaining the positive impact of charter school competition on districtwide academic outcomes. The results above suggest that, for the average Ohio district experiencing charter school growth, these efforts did not yield large positive impacts on student achievement (though they very well may have in Dayton[11]), nor any discernible negative impacts.
On the other hand, the average Ohio district’s response to charter school competition led to increases in attendance and graduation rates. The more charter competition a district felt, the less likely their students were to miss school or drop out three or four years later. That charter school competition appears to have spurred improvements in Ohio school districts between 2001 and 2007 is particularly remarkable given how maligned Ohio’s charter sector was in those days. Charter schools were not nearly as effective in those early years as they are today (though the best evidence for that time period indicates that brick-and-mortar charter schools were no worse, on average, than district schools). Why that may have occurred is a topic for another day, but one wonders whether keeping students in school (and, thus, keeping the state funds that follow them) became more important to districts as they began to face competition. For now, though, the analysis above provides some further reassurance that it is worthwhile to draw attention to districts with solid charter market shares as an indicator of healthy school marketplaces.
About the author and acknowledgments
Stéphane Lavertu is a Senior Research Fellow at the Thomas B. Fordham Institute and Professor in the John Glenn College of Public Affairs at The Ohio State University. Any opinions or recommendations are his and do not necessarily represent policy positions or views of the Thomas B. Fordham Institute, the John Glenn College of Public Affairs, or The Ohio State University. He wishes to thank Vlad Kogan for his thoughtful critique and suggestions, as well as Chad Aldis, Aaron Churchill, and Mike Petrilli for their careful reading and helpful feedback on all aspects of the brief. The ultimate product is entirely his responsibility, and any limitations may very well be due to his failure to address feedback.
Endnotes
[1] An open-access version of the paper is available here, and an accessible summary of an earlier version of the paper is available here. These results are consistent with those of a prior Fordham study.
[2] Note that their analysis leaves out students in virtual charter schools and those serving special-education students, which suggests that the participant effects should be positive.
[3] The primary limitation of Chen and Harris’s analysis relates to their data. Their study measures important quantities with significant error (e.g., charter market share and graduation rates), does not exploit pronounced differences in charter school growth between districts (e.g., their achievement data begins in 2009, well after the initial and steep charter school growth I examine in my analysis), and focuses on years after the implementation of No Child Left Behind and the onset of the Great Recession (both of which disproportionately affected districts with growing charter sectors). These limitations likely make it difficult to detect effects in specific states, particularly states like Ohio, where the measurement error and lack of market-share variation are significant. I am not criticizing the quality of their valuable nationwide analysis. The data they use are the only option for conducting a rigorous nationwide analysis, as they need measures that are available across states. But when producing Ohio-specific estimates of charter school effects, these limitations might preclude detecting effects because the signal-to-noise ratio is too low. I provide further details in the appendix.
[4] I thank Jason Cook for kindly sharing these data with me, which he collected for this study of charter competition’s impact on district revenues and expenditures. Note that Cook’s study estimates charter enrollment effects in the post-NCLB period, which may introduce some complications that my study seeks to avoid.
[5] The Ohio 8 districts are Akron, Canton, Cincinnati, Cleveland, Columbus, Dayton, Toledo, and Youngstown.
[6] Average market share increases more slowly and unevenly after 2007, as charter closures became more prevalent in districts with more mature charter sectors. Thus, although average enrollments continued to increase statewide through 2014, there is not a clean upward trajectory in charter market share in every district.
[7] These graduation rates are not as good as the cohort-based graduation rates introduced in later years, but they cover the same time span as the performance index and are based on calculations that account for actual enrollments and dropouts in every high school grade.
[8] Specifically, I estimated two-way fixed-effects panel models with lags and leads of district market share as predictor variables and 2001–2007 achievement, attendance, and graduation rate data as the dependent variables. Scholars have recently identified potential problems with these models, and there are concerns about the extent to which they capture “difference in differences” comparisons that warrant a causal interpretation, which is why I sometimes use qualifiers such as “roughly” when describing what the estimates of my analysis capture. The basic model includes district and year fixed effects, but the results are qualitatively similar when I control for time-varying demographics (e.g., student free-lunch eligibility, race/ethnicity, and disability status). These robustness checks, in conjunction with the use of leads that allow for placebo tests and control for potential differences in district trends, provide reassurance that the estimates are credible. The appendix contains a more precise description of the statistical modeling and results.
[9] Note that there is no estimated change in Year 0 for the attendance and graduation analyses, and if students more likely to attend school and graduate were the ones who switched to charters, that should have led to lower district attendance and graduation rates.
[10] Indeed, this potential explanation is consistent with the design of the charter school law, which in later years permitted the establishment of charter schools in districts that failed to reach performance designations (which were based in large part on the performance index).
[11] Unfortunately, Dayton is one of the handful of districts for which I am missing initial years of data, which means its 2002 efforts—in response to enrollment losses in the preceding two years—do not factor into the estimates above. Additionally, the statistical analysis cannot speak to the effects in a specific district.