If you’ve been on social media lately, you’ve doubtless encountered fictional stories and essays generated by “ChatGPT,” an artificial intelligence program that can generate remarkably solid pieces of prose in response to prompts both serious and whimsical, and can do so instantly and in any imaginable style.
Some find this thrilling. Others, mostly writers and teachers, are filled with existential dread. “My life—and the lives of thousands of other teachers and professors, tutors and administrators—is about to drastically change,” wrote English teacher Daniel Herman in The Atlantic.
My hunch is that’s a lot less true than he thinks. For starters, let’s immediately dispense with the idea that artificial intelligence will make writing instruction obsolete. As Herman put it, “It’s no longer obvious to me that my teenagers actually will need to develop this basic skill.” Remember the “twenty-first-century skills” movement? There was breathless insistence that, with all the world’s knowledge now in our pockets and mere keystrokes away, K–12 education should prioritize critical thinking, problem solving, creativity, and communication. The question on the lips of education’s smart set back then was, “Why cram kids’ heads with a bunch o’ facts when we have Google?” Similarly, why teach writing when “AI” can generate sophisticated text at the touch of a button?
E.D. Hirsch, Jr. nailed the answer twenty years ago. “The Internet has placed a wealth of information at our fingertips. But to be able to use that information—to absorb it, to add to our knowledge—we must already possess a storehouse of knowledge,” he wrote. “That is the paradox disclosed by cognitive research.” More recently, University of Virginia professor Dan Willingham wrote that research in cognitive science has shown that “the sorts of skills that teachers want for students—such as the ability to analyze and to think critically—require extensive factual knowledge.”
In other words, it takes knowledge to communicate knowledge—or even to have the discernment to judge whether an AI-generated piece of text makes sense or sufficiently responds to a prompt. Herman writes that what GPT can produce right now “is better than the large majority of writing seen by your average teacher or professor.” Perhaps so, but this is a problem unique to the education equivalent of the “worried well” and their teachers, not the far larger number of students for whom constructing a tolerable five-paragraph essay is a daunting challenge. A tiny minority of American high school students are like Herman’s, discussing and analyzing “Anzaldúa’s radical ideas about transcending binaries, or Ishmael’s metaphysics in Moby-Dick.”
Skilled teachers who know their students have seldom failed to notice when a turned-in assignment bears the thumbprint of a little extra help from home or simply doesn’t sound like original work. It will be no different with AI. Another English teacher, Peter Greene, whose rural Pennsylvania students are far more typical than those in an elite prep school, noted in Forbes that ChatGPT covers gaps in its knowledge by making things up and embellishing. “In other words, it has an eerily human capacity for bullshitting its way around gaps in its data base.” Teachers who know their students will not easily be fooled. Greene makes the excellent suggestion that teachers try out their writing assignments on the chatbot: “If it can come up with an essay that you would consider a good piece of work, then that prompt should be refined, reworked, or simply scrapped.”
Concerns that AI makes writing instruction obsolete are manifestations of the “Curse of Knowledge,” an idea popularized by Chip Heath and Dan Heath in their 2007 book Made to Stick. The curse of knowledge is “a cognitive bias that occurs when an individual, communicating with other individuals, unknowingly assumes that the others have the background to understand.” It’s a common problem in education: to the well-educated and language-proficient—already rich in knowledge and sophisticated language—problem solving, critical thinking, and clear, sophisticated written analysis all feel like “skills” that can be practiced and mastered (or plausibly faked via artificial intelligence). Herman worries that AI will make it easy for students to avoid “doing the hard work of actual learning,” but there’s no reason to think the average high school student could even read the machine-generated essays ChatGPT creates, let alone plausibly pass one off as their own work.
In sum, the threat of ChatGPT is not that it will make writing instruction obsolete. It’s the assumption that it will that we should guard against. The world of futurists, technology enthusiasts, and educated elites is simply not the same as the world occupied by the substantial majority of students struggling to reach basic levels of language proficiency. Artificial intelligence will provide time-saving tools for knowledge-haves, but if we repeat the mistake made by promoters of “twenty-first-century skills,” it will be fatal to the interests of knowledge have-nots, who will never get the opportunity to develop the knowledge base that the well-educated take for granted—and that makes those tools useful.
In the wake of pandemic-related learning loss, there’s widespread agreement that we must find more time for learning, and a number of schools and districts have added afterschool tutoring and summer school to their calendars. While such additions to schools’ teaching and learning time have benefits, intensifying our focus on how we use the classroom time we already have is even more important. To that end, I urge consideration of “learning density.”
By which I mean the amount of new learning—concepts, vocabulary, insights, and reasoning—jammed into every chunk of a lesson. It’s something we’ve been thinking about a lot lately at Success Academy, the charter school network I founded and lead.
In classrooms with high learning density, every minute is packed with penetrating questions, purposeful discussion, new knowledge, and high-quality ideas. Students love being there because they’re making connections and gaining insight at a fast clip, and it’s more enjoyable for teachers because each lesson does more to boost students’ intellectual growth.
Learning density is different from instructional efficiency, which is about eliminating pockets of wasted time during lessons. Instructional efficiency has long been a focus in high-performing charters, and is an important reason why Success Academy and other schools embrace high standards for student behavior and efficient routines for everything from entering the classroom, to handing out materials, to moving around the room.
Instructional efficiency only goes so far, however. Achieving it allows teachers to cover more material faster but leaves aside the question of the quality of learning—its richness, depth, and range. In an instructionally efficient math class, for example, students may briskly learn how to employ a new procedure to solve a specific kind of problem and have several opportunities for practice. In one with high learning density, by contrast, students learn not only how to solve the problem but also why the procedure works and the mathematical concept underlying it. Along with computational skills, they practice mathematical reasoning and gain conceptual understanding, which will help them tackle unfamiliar problems in the future. In the same amount of time, students’ knowledge grows along more dimensions.
When we observe lessons in our schools, we evaluate them through the lens of learning density. We often close our eyes for five minutes, listen closely, and ask ourselves, “Is my knowledge and understanding of the problem or text increasing in this chunk of time? Would I want to be a student in this class?” Too often, the answer to both these questions is no—but I think we get ahead of the game just by asking them.
Educators tend to be satisfied by any lesson that has active student participation and little obvious disruption. Yet what’s often happening is that teachers simply “pass the mic” without driving toward a clear intellectual purpose, frittering away both their and their students’ time. The end result is disengagement—particularly among students who struggle most—and high rates of academic failure. Teachers, meanwhile, feel exhausted and rushed for time because they constantly have to make up later what students didn’t learn.
This is a tragedy for both kids and adults. If we want our pupils to have a productive, impactful schooling experience—and want teaching to be enjoyable and manageable for educators—class time must be packed with dense, layered learning. This starts with high-quality curricula and instructional materials. Works of literature and primary sources should be rich in vocabulary, knowledge, and ideas. Math problems and science experiments should push students to grapple with essential underlying concepts. Assignments and assessments should be intriguing and spark deep thinking.
We must also pay far closer attention to instructional practices. Are teachers starting off discussions with a challenging question that gets students thinking right away or one that pitches to the lowest level? Are they introducing new knowledge, vocabulary, and ideas or reteaching what students already know? Are they promptly correcting inaccuracies or wasting time debating questions with clear yes and no answers? Are they delighting in a spot-on idea and building on it or failing to distinguish low- and high-quality ideas? These are all important questions for determining the level of learning density in a classroom.
Needless to say, a peaceful, orderly classroom environment is an essential precondition for achieving high learning density. Once routines and systems are established, however, going for high learning density is itself a powerful classroom management strategy. Students don’t act out when they’re being provoked, challenged, and stretched by difficult and worthwhile questions and problems.
As schools and districts work to address learning loss in the wake of the pandemic, an intensive focus on learning density is a strategy that will allow teachers to engage students, supercharge learning, and reduce academic failure. It doesn’t require additional funding or major systemic reform. It also makes the job of teaching far easier: Students learn more, their outcomes improve, and the need for reteaching recedes. But getting to learning density requires training and coaching, along with time for intellectual preparation so teachers can develop their own high-quality ideas about the material and questions that spark student thinking. Ensuring that teachers and leaders have the time and space to pursue this goal should be a top priority for schools across the country.
Every year, I ask my students to memorize a poem, but I intentionally avoid using the word “memorize.” Rather, they must learn Robert Frost’s Stopping by Woods on a Snowy Evening, Langston Hughes’s A Dream Deferred, or some other famous verse by heart. Memorized poems become something we hold dear in our hearts and minds—almost mental keepsakes. But there are also demonstrable educational reasons for committing verse to memory.
Throughout history, literary and artistic greats have employed memorization as part of their creative development. A common practice in jazz circles is transcribing and memorizing famous solos; there are articles online dedicated to listing the classic improvisations that any aspiring musician ought to have memorized. Charlie Parker famously memorized Lester Young solos, transposed them into every key, and practiced them at double or triple their original tempo. Shakespeare would likely have spent time memorizing Holinshed’s Chronicles, from which he drew inspiration for his history plays. Augustine would have learned entire passages of Virgil by heart. Each of these literary, musical, and philosophical masters, by memorizing the best of their predecessors, developed a palette for sound, rhythm, imagery, and melody.
In his book How to Think Like Shakespeare, Scott Newstok outlines many of the presuppositions of medieval and Enlightenment education and emphasizes how differently we understand creativity today. “If anything,” he writes, “creative imitation...has been the hallmark of art and industry” for most of human history. Upon discovering a piece of writing they admired, Robert Louis Stevenson, Benjamin Franklin, and countless other authors would imitate it to perfect their own craft—reproducing it directly, memorizing it, or attempting to mimic its style. “By trying to sound like someone else,” Newstok writes, “you begin to sound ‘like yourself.’” In memorizing literature, our students implicitly adopt the best of what is in these authors and can begin to employ their skills, patterns of thought, lines of argumentation, and knowledge in their own thinking and academic work.
There’s also a strong knowledge-rich justification for the memorization of literature. Students may initially learn the words of the Preamble to the U.S. Constitution or a Dickinson poem without profound understanding. That’s fine: young children delight in nursery rhymes—the rhythm, the symmetry, the sing-song nature—before they fully understand the words. Those words, locked into memory for recall, allow them to compare and analyze later, once they have gained the capacity to fully understand them. If “critical thought” is a process of making connections, as Rousseau contends, then students need nodes to connect, and memorized verse forms powerful nodes that can supercharge later thinking. Memorized texts also serve as models as we focus on every word, detail, image, grammatical construction, passing emotion, punctuation or lack thereof, and poetic or rhetorical detail.
It is common parlance in education to hear “I want to teach students how to think, not what to think.” But the how must begin with the what. Critical thought requires knowledge. For our students to think well, we must give them something to think about. Like a painter and their paints, our students need intellectual supplies to think with. Beautiful imagery, profound arguments, pithy lines, famous phrases, and the like are some of the best supplies of thought that they could acquire.
If we want them to think critically about politics, what better than the knowledge of and reflection on the Bill of Rights that would come through memorizing it? If we want to teach them to analyze literature, how better than to store away some of the most commonly alluded-to verses from scripture?
Even before a child fully grasps the meaning of a nursery rhyme, they develop a sense of rhythm, order, and balance. They get fleeting images that store in their memory even if some of the finer points elude them. They also store complex syntax and familiarize themselves with unique grammar constructions to employ in their own writing. Complex language becomes second nature.
Memorization doesn’t inhibit the growth of the self or critical thought. Rather, it strengthens the mind, focus, and attention to detail as drills and strength training do the body. It enhances the self, providing students with knowledge and language that they can use against lazy thinking and propaganda. Memorization does not stifle freedom but, through difficulty now, allows students to make better use of that freedom later on. It enlarges, strengthens, and frees the self.
My own students demonstrate the fulfillment and creativity that come with memorization. They memorize literature every year in my class, sometimes a poem to recite to the class and at other times lines from Shakespeare for performance. Last year’s poetry recitations were a delight to watch. Students paced the room, speaking to the sky, to the crowd, to different classmates depending on the line. Some spoke in accents and performed pre-planned motions. One student practiced with her friends, who acted out Edgar Allan Poe’s Annabel Lee as she spoke it. In my Shakespeare unit, the performance of a scene is the final assignment, and students must memorize their lines. The laughter that echoes from the halls as they plan their costumes, setting, and performance is the exact opposite of the false caricature of students sitting dully at their desks, reciting lines of Whitman in choral response to a teacher up front.
What do I most value about memorization? The number of students who have returned to my class to tell me, with a smile, that they still remember Wordsworth’s daffodils—and then giddily recite the poem for me. Simple delight and joy in beautiful literature.
It’s poignant to read the mainstream media fanfare and reviews that greeted William J. Bennett’s The Book of Virtues when it was published in 1993, and to reflect on the America that made the book an unlikely cultural phenomenon: 3 million copies sold, spin-off books for young children, even a cartoon series on PBS, which ran for nearly five years.
A few months after it took off, Newsweek put Bennett on its cover along with Peggy Noonan and Hillary Clinton (!), a trio of haloed and white-robed “virtuecrats” fighting America’s moral decline. Stifling a smirk, writer Howard Fineman dubbed Bennett a one-man “cottage industry of character education—a multimedia McGuffey,” yet the magazine’s own polling forced him to concede that Bennett was on to something: More than three-quarters of Americans (76 percent) agreed that “the United States is in moral and spiritual decline.” The fraying of the nation’s social fabric could no longer be dismissed as merely “the crotchety preoccupation of the cultural right.” So undeniable was the appeal of Bennett’s character campaign that Fineman predicted the rise of “a new kind of politics that could dominate the decade if economic or foreign crises don’t intervene.” A year later, Monica Lewinsky began her White House internship.
Bennett the virtuecrat never launched the presidential bid that Newsweek and so many others assumed he would. Eventually, the nation that kept The Book of Virtues at number two on the New York Times best-seller list for eighty-eight weeks would choose a president whose lack of the virtues Bennett extolled—self-discipline, honesty, and responsibility—was less a political liability than a personal brand. The nation still awaits the rise of a virtuous political class.
Likewise, it’s hard not to be wistful perusing the newly published thirtieth-anniversary edition of The Book of Virtues. My memory betrayed me in revisiting the anthology for the first time in decades; I recalled, incorrectly, that it was written for young people. Organized by individual character traits—compassion, friendship, faith, and so on—each chapter starts with nursery rhymes and light verse, suitable for reading aloud to children. But the anthology’s selections grow longer, richer, and more challenging from the beginning to the end of each chapter: Plato on self-discipline from “Gorgias”; an 1852 speech by Frederick Douglass holding America accountable for the sin of slavery; the Agincourt speech from Shakespeare’s “Henry V.” The meatier fare invites quiet, contemplative reading after the kids are in bed. It’s a moral primer, but it’s not homework. Entries include items on the high school basketball team memorialized in the movie Hoosiers and the “Thrilla in Manila” heavyweight title fight between Joe Frazier and Muhammad Ali. One delightful entry in the chapter on courage is also the anthology’s shortest: an advertisement placed in London newspapers by Ernest Shackleton in 1900 seeking volunteers for his Antarctic expedition, which read, in full:
MEN WANTED FOR HAZARDOUS JOURNEY
Small wages, bitter cold, long months of complete darkness,
constant danger, safe return doubtful.
Honor and recognition in case of success.
The response was “overwhelming,” Shackleton later said.
Nearly one-third of the selections in the thirtieth-anniversary edition are new, including entries about Mother Teresa, Jaime Escalante, and Navy SEAL Michael Murphy, a posthumous Medal of Honor recipient who died saving his comrades in Afghanistan. No thought was given to updating the list of virtues themselves, however. “You know, people are honest or they are not. They believe in the value of work or they don’t,” Bennett remarked when I spoke with him recently. “C.S. Lewis said you could no more invent a new virtue than a new color.”
True, but some are more in season than others. Bennett took rueful note of changing attitudes about work, one of the book’s ten virtues, and the millions of American men in their prime years who are forgoing meaningful work, their lives consumed by screen time and, in too many cases, opioid abuse. “The person who has missed the pleasures of work, of a job well done, has missed one of the great pleasures of life. Hard work can be done by people on the left or on the right,” Bennett said, noting that, even thirty years ago, some critics carped that he had produced a book of conservative virtues. “There was a book of liberal virtues,” he said. “It didn’t sell.”
The 1990s were no less obsessed with “multiculturalism” than readers today are with “identity.” Both the original Book of Virtues and the revised edition stand up well to the demands of diversity, with entries by and about Martin Luther King Jr., Harriet Tubman, James Baldwin, and others, and folktales from many traditions. This will likely not be sufficient to appease present-day readers less concerned with which authors are included than with which ones are cancelled and purged. Perhaps Bennett and his wife and co-editor Elayne might yet be persuaded to add one more virtue in future editions: tolerance.
Bennett’s anthology pairs nicely with E. D. Hirsch, Jr.’s Cultural Literacy, another out-of-nowhere phenomenon, which preceded it by a few years. Bennett, the conservative secretary of education under Ronald Reagan, was an early and enthusiastic proponent of Hirsch, the liberal English professor from the University of Virginia. Each man saw in the timeless appeal of these tales and figures a way to advance his project. For Bennett, stories drawn from the Bible, Greek mythology, fairytales, oft-recited poems, and episodes from American history cultivated character. For Hirsch, they were essential components of the common store of background knowledge, allusions, and idioms on which reading comprehension depends.
Our political divide and the partisan media ecosystem that feeds it make it unlikely, alas, that an anthology edited by a prominent conservative intellectual could win even a fraction of the attention that greeted the original volume. “Look at the blurbs on the back cover,” Bennett said, referring to laudatory pull quotes from Time, Newsweek, the Washington Post, and People praising the original Book of Virtues. “I don’t think I’d get those today,” he conceded, though the evidence of moral decline that so vexed the editors of Newsweek thirty years ago is no less compelling today. “Trust in institutions was down, people were worried about schools and family formation,” Bennett observed. “Those are even bigger concerns now.”
Editor’s note: This was first published in the Manhattan Institute’s City Journal.
Authorized by Congress in 2004, the Washington, D.C., Opportunity Scholarship Program (OSP) is one of the longest-standing voucher programs in the nation. It is also the only federally funded one. While its fortunes have shifted as congressional and executive branch leadership has switched parties over the years, the program has endured. As a result of its longevity, OSP is a perfect lens through which to study the long-term effects of voucher usage. New research in the AERA Journal looks at the impact of the program on participants’ college enrollment rates.
Researchers Matthew Chingos and Brian Kisida go back to the first two OSP application rounds in 2004 and 2005 and identify 1,776 students who entered the voucher lottery—they applied for seats in oversubscribed grade levels—and who can also be observed in OSP administrative records for at least two years after their expected high school graduation. National Student Clearinghouse data provide college enrollment information.
The treatment group comprised 1,059 students who won the lottery and were offered an OSP voucher; lottery non-winners were assigned to the control group. OSP applicants overall had an average family income of about $18,000 in 2004–05, and less than 20 percent came from married, two-parent families. Almost all were Black or Hispanic, and almost a quarter were already utilizing school choice by attending a charter school at the time of application. Treatment group students were slightly more likely than control group students to be Black or Hispanic and had slightly lower family income. Chingos and Kisida run an intent-to-treat (ITT) analysis, noting that only 74 percent of those offered a voucher actually used one and some students in the control group accessed private schools without the aid of OSP. Of those who accepted the offered voucher, approximately 30 percent utilized it for four or more years.
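The logic of an intent-to-treat design is worth making concrete: outcomes are compared by lottery assignment rather than by actual voucher usage, which preserves the randomization even though some winners never used their vouchers. Here is a minimal sketch of that setup in Python; the file and column names are hypothetical, not drawn from the authors’ data:

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical data: one row per lottery applicant. Column names are
# invented for illustration, not taken from the study's files.
df = pd.read_csv("osp_applicants.csv")

# Intent-to-treat (ITT): regress the outcome on lottery *assignment*
# ('won_lottery'), not voucher *usage*, so the roughly 26 percent of
# winners who never used a voucher do not bias the comparison.
itt = smf.ols(
    "enrolled_college ~ won_lottery + family_income + black + hispanic",
    data=df,
).fit(cov_type="HC1")  # heteroskedasticity-robust standard errors

print(itt.params["won_lottery"])  # ITT effect of the voucher offer
```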
Statistically speaking, the college enrollment rates of the treatment and control groups were the same, though voucher recipients were slightly less likely to enroll within three years of graduation and slightly more likely to enroll four or five years after it. Chingos and Kisida theorize that these patterns could reflect delayed graduation among voucher students who needed extra time to complete diploma requirements (private schools may impose higher requirements than their public counterparts), as well as the observed improvement in overall educational outcomes in D.C. district and charter schools after 2004.
Unfortunately, the data do not yet reach far enough to look at college completion outcomes for OSP students, an important point where outcomes for the two groups might diverge significantly. That is a study for another day.
Cheers

The SUNY Reading Corps, made up of pre-service teachers, provides high-dosage tutoring in reading to New York City students. —Chalkbeat
The president of the Carnegie Foundation wants high schools to move away from the Carnegie Unit, the standard for credit hours it developed over a century ago. —Education Week
An Indiana program that allows high school students to take career-oriented pathways to graduation could be expanded statewide. —Chalkbeat
Not content to rest on his laurels, Arizona Governor Doug Ducey became a champion of school choice in his final year in office. —Keri D. Ingraham
Officials in Texas are holding the state’s largest alternative certification program accountable by moving to revoke its accreditation due to poor performance. —Dallas Morning News
Jeers
A $123 million program in Los Angeles to offer extra school days to offset pandemic-related learning loss is poised to have dismal attendance, with just one in nine eligible students signing up. —Los Angeles Times
D.C. is one step closer to changing its school funding model for the worse. Money should follow the child, and when enrollment drops, so too should funding. Otherwise, schools that lose families end up with more money per kid, which isn't fair. —Washington Post
A new study finds a “perception gap” between what Republicans and Democrats think each other believes about the education culture wars and what those groups actually believe. —Education Week
The headline of this Cincy piece promises to tell us about “the four pandemic changes to education that are here to stay.” It’s an interesting discussion, I guess, but my list would be very different. (Cincinnati Enquirer, 12/12/22) On a related note, the superintendent of Cincinnati City Schools gave her state of the district address this week. While she does say she’s interested in changing things post-pandemic—“Let’s take it back to the basics and that’s what we’ve done. The basics,” she said. “And nothing is more basic than the ABCs: Academics, behavior, and culture.”—her ABCs don’t sound anything like the Three Rs to me. Now that would be a back-to-basics approach. (WVXU-FM, Cincinnati, 12/13/22)
School options have increased in Ohio and across the nation in recent decades. One prominent option is publicly funded scholarships (or “vouchers”) that families can use to send their children to participating private schools. Today, over 75,000 Ohio students participate in one of the state’s five voucher programs, the two largest of which are known as EdChoice. Despite the programs’ growing popularity, critics have argued that expanding vouchers harms traditional school districts. But are they right?
Authored by Ohio State University professor Stéphane Lavertu, this report explores the impacts of Ohio’s EdChoice program on school district enrollments, finances, and educational outcomes. The study includes detailed analyses of the state’s “performance-based” EdChoice program, which, as of 2021–22, provides vouchers to approximately 35,000 students, as well as its “income-based” EdChoice program, which serves approximately 20,000 low-income students.
To read the full report, scroll down or download the PDF (which also includes technical appendixes).
Foreword
By Aaron Churchill and Chad L. Aldis
Try saying it with us: “Choice and competition are good.” Don’t take our word alone. On the left, President Joe Biden said:
The heart of American capitalism is a simple idea: open and fair competition—that means that if your companies want to win your business, they have to go out and they have to up their game. . . . Fair competition is why capitalism has been the world’s greatest force for prosperity and growth.[1]
And on the right, President George W. Bush stated:
Free market capitalism is the engine of social mobility—the highway to the American Dream. . . . If you seek economic growth, if you seek opportunity, if you seek social justice and human dignity, the free market system is the way to go.[2]
Their remarks, of course, refer to economic systems writ large, while the report that follows is about primary and secondary education. Even so, in the world of K–12 education, choice and competition are also the way to go. Choice empowers families to select the schools that best fit the needs of their kids, be they district options or alternatives such as public charter schools or private schools. No longer are parents shackled to a singular, residentially assigned, district-operated school that may or may not educate their children to their full potential.
Competition—the other side of the same coin—encourages schools to pay closer attention to the needs of parents and students who, if dissatisfied, can vote with their feet and head for the exits. This sort of “market discipline” helps to promote productive habits and effective practices across both the public and private school sectors.
National polling shows that strong majorities of Americans agree with the intuition behind expanded school choice.[3] Rigorous analyses also reveal benefits to students when states foster choice-friendly environments. Studies on the “competitive effects” of choice programs (both public charter and private school voucher) in states such as Arizona, Florida, and Louisiana have uncovered academic gains for students who stay in traditional public schools.[4] A sweeping national study from our Fordham Institute colleagues in Washington, D.C., found that district and charter students make stronger academic progress when they live in metro areas with more options.[5]
Closer to home, we published a 2016 study by Drs. David Figlio and Krzysztof Karbownik finding that Ohio’s private school voucher program, known as EdChoice, lifted achievement in district schools during its very first years (2006–07 to 2008–09).[6]
So what brings us here again?
In short, the detractors haven’t gone away, the self-interest of the “education establishment” remains strong, and accusations continue to fly that choice programs harm traditional public schools. Here in Ohio, the latest incarnation is the “Vouchers Hurt Ohio” coalition, a group of lobbyists, district administrators, and school boards that have sued the state in an effort to eliminate EdChoice, the state’s largest voucher program, which now serves some 60,000 students. Presently pending before a Franklin County judge, their lawsuit claims that EdChoice hurts district finances and results in more segregated public schools. Though not expressly stated, their claims imply that district students themselves may also be harmed academically.[7]
The plaintiffs build their case around basic school funding and demographic statistics. Their numbers, however, cannot prove cause and effect. For instance, as evidence of segregation, they cite data from the Lima School District, which enrolled 54 percent students of color pre-EdChoice, a number that has since risen to 65 percent. Their implication is that EdChoice is responsible, but they ignore even the possibility that this trend could reflect larger demographic shifts that are entirely unrelated to vouchers (e.g., White families exiting the district and/or minority families moving in). Unless one conducts rigorous analyses that separate the impacts of EdChoice from other factors, superficial statistics are apt to mislead.
To conduct a more careful investigation of the impacts of EdChoice, we turned to Stéphane Lavertu of The Ohio State University. He is a fine scholar, with a strong track record of evaluating various school choice and accountability initiatives in the Buckeye State. His work includes a widely cited national evaluation of charter school performance for the RAND Corporation, along with numerous studies on local school governance and policy that have appeared in peer-reviewed academic outlets such as the Journal of Public Economics, American Political Science Review, Educational Evaluation and Policy Analysis, and Journal of Policy Analysis and Management.
The truth of the matter turns out to be far different than the critics’ claims. In studying the state’s performance-based EdChoice program[8] from inception to the year just prior to the pandemic (2006–07 to 2018–19), Dr. Lavertu uncovers the following:
Improved district achievement. Importantly, he finds that EdChoice did not harm district students and, in fact, led to modest achievement gains. Although affected districts still tend to struggle overall academically, these incremental improvements suggest that a more competitive environment may have helped focus districts on academic success. The results here are consistent with other studies, including the aforementioned Ohio analysis by Figlio and Karbownik, showing a positive influence of choice programs on district performance. Yes, that is right: a positive influence on district achievement, even as some students depart for private alternatives.
Reduced district segregation. Due to their geographic boundaries and residential patterns, Ohio school districts were highly segregated long before vouchers existed. Fortunately, EdChoice has helped to lessen district segregation—no doubt reflecting the strong participation of Black and Hispanic students in the program (55 percent of voucher recipients in 2018–19). With such large numbers of Black and Hispanic students moving to the private school sector, the likelihood that public school students are in a racially “isolated” school has declined.
No impact on district expenditures per pupil. Contrary to the critics’ chief allegation—that vouchers “drain money” from districts—EdChoice does not decrease the per-pupil expenditures of districts. The neutral fiscal impact should be expected. While seeing reductions in state aid when students participate in choice programs, districts experience an increase in local funding per pupil, because those taxpayer dollars do not “follow” voucher (or charter) students to their schools of choice.
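The arithmetic behind this finding is simple enough to make concrete. A toy illustration with invented numbers (chosen only to mirror the 10–15 percent range the report describes, not taken from its data):

```python
# Toy example: local tax revenue is roughly fixed in the short run, so
# when enrollment falls, local revenue *per pupil* mechanically rises.
local_revenue = 50_000_000   # hypothetical district's local tax revenue ($)
enrollment_before = 10_000
enrollment_after = 8_700     # a 13 percent enrollment decline

per_pupil_before = local_revenue / enrollment_before  # $5,000 per pupil
per_pupil_after = local_revenue / enrollment_after    # about $5,747

pct_change = (per_pupil_after / per_pupil_before - 1) * 100
print(f"Local revenue per pupil rises by about {pct_change:.0f} percent")  # ~15
```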
The available data did not allow Dr. Lavertu to draw clear conclusions about the causal impact of income-based EdChoice, which bases voucher eligibility on family income.[9] That being said, the evidence—circumstantial as it may be—indicates that this newer program (launched in 2013–14) has no adverse impact on districts.
“The sky is falling,” cry the Chicken Little critics of Ohio’s choice programs. But they are mistaken. District students are not left as “collateral damage” when parents have more education options and decide to pursue them. Quite the contrary: the increased competition seems to stir traditional public schools to undertake actions that benefit district students. For judges, journalists, and Ohioans everywhere, we say this: don’t believe the critics’ hype. And for Ohio policymakers, we say: keep working to expand quality educational choices. Free and open competition is known to fuel economic growth. In the same way, it can also drive excellence, innovation, and greater productivity in our public and private schools.
Executive summary
Ohio’s Educational Choice Scholarship Program (EdChoice) is the state’s largest voucher program. It provides state funds to cover all or part of private school tuition for eligible students. Since the 2006–07 school year, the original EdChoice program—known as “performance-based EdChoice”—has provided scholarships to families whose children attend traditional public schools that the state deems to be poorly performing. And since 2013–14, the EdChoice expansion program—known as “income-based EdChoice”—has provided scholarships to low-income families whose children do not qualify for the performance-based program.
The central purpose of EdChoice is to provide more educational options for economically or educationally disadvantaged students. Another purpose is to introduce competition into the education system to incentivize improvement in traditional public schools, thereby also benefiting students who do not participate in the program. Although earlier research has found some evidence of positive competition-related effects in the initial years of the performance-based EdChoice program, a recent lawsuit claims that Ohio’s EdChoice programs have harmed district finances, contributed to racial segregation, and undermined the education of district students. The purpose of this report is to determine what discernible effects EdChoice has in fact had on Ohio districts as its programs have expanded.
We estimate the impact of the EdChoice programs by comparing changes in district outcomes (from before these programs were in place to the 2018–19 school year) between districts that had higher as opposed to lower levels of exposure to them. Research demonstrates that such comparisons can allow one to estimate the causal impact of a policy if certain assumptions are met. As we detail in the report, our analysis of the performance-based EdChoice program arguably satisfies those assumptions and, thus, enables us to make claims about its causal impact on Ohio school districts. Unfortunately, our analysis of income-based EdChoice reveals that the research design’s assumptions are unlikely to hold. Consequently, when it comes to income-based EdChoice, we can speak only to the relative changes in outcomes for districts with higher (as opposed to lower) levels of exposure to the program. We cannot claim that income-based EdChoice caused those relative changes.
Here, in brief, are our estimates of the performance-based EdChoice program’s impact as of 2019 (thirteen years after the program’s start in the fall of 2006) on the average public school district exposed to it:
District enrollments were approximately 10–15 percent lower than they would have been had districts not been exposed to the program. These effects are driven by relative declines in the enrollment of Black students, not Asian American, Hispanic, or White students.
Racial and ethnic segregation was approximately 10–15 percent lower than it would have been had districts not been exposed to the program. Specifically, as a result of the EdChoice program, Black, Hispanic, and Native American students attended schools with a greater share of White and Asian American students than they would have in the absence of the program.
Total district spending per pupil (including both capital outlays and operational spending) was no different than it would have been had districts not been exposed to the EdChoice program. There is suggestive evidence that the program in fact led to a relative increase in districts’ operational spending per pupil. That is because EdChoice had no impact on districts’ ability to raise local revenue. Combined with the decrease in enrollments, this dynamic led to a 10–15 percent increase in local revenue per pupil. Consistent with these estimates, the analysis reveals no statistically significant impacts on district property values, local property tax rates, local property tax revenues, levy passage rates, or state-defined fiscal stress designations.
The academic achievement of district students—as measured by the state’s performance index—was significantly higher than it would have been had districts not been exposed to the EdChoice program. For the average student in a district exposed to performance-based EdChoice, their district’s achievement went from approximately the second percentile (the twelfth-lowest-achieving Ohio district) to approximately the sixth percentile (the thirty-seventh-lowest-achieving Ohio district). This effect is consistent with the program targeting low-achieving schools, thus increasing average district proficiency rates among remaining students. The results are also consistent with a rigorous study indicating that the competitive pressures of performance-based EdChoice had a null or positive impact on Ohio’s traditional public schools during the program’s early years.[10]
As for the income-based EdChoice program, we noted above that the estimated impacts on districts are more suggestive than causal. A basic comparison of pre- and post-program outcomes indicates that, by 2019, districts more exposed to the program (by virtue of having higher poverty rates) experienced relative declines in enrollments; relative declines in racial and ethnic segregation; no statistically significant changes in total expenditures; relative increases in operational expenditures per pupil; relative increases in local revenue per pupil; and no statistically significant changes in student achievement. However, estimated changes in enrollments and segregation are highly sensitive to model specification and, thus, a causal interpretation of these findings is particularly suspect.
Overall, the analysis provides solid evidence that the performance-based EdChoice program led to racial integration, had no adverse effects on revenues per pupil, and increased student achievement in public school districts. It also reveals no credible evidence that the income-based EdChoice expansion harmed districts in terms of segregation, revenues, or student achievement. Thus, although EdChoice-eligible schools have indeed experienced increases in racial segregation and declines in achievement, these trends began well before 2006 and should not be attributed to private school vouchers. Instead, the analysis indicates that these long-term trends in district segregation and achievement would have been worse in the absence of the performance-based EdChoice program.
Introduction
School options have increased in Ohio and across the nation. Most prominent are public charter schools, which operate independently of school districts and now educate approximately 7 percent of Ohio students. Another prominent option is publicly funded scholarships (or “vouchers”) that families can use to send their children to participating private schools. Although involving fewer children than charter schools, participation in Ohio’s several voucher programs has risen significantly—particularly during the pandemic, when many families sought in-person schooling options for their children and found them more readily available in private schools.[11] Today, over 4 percent of school-aged children in Ohio participate in one of Ohio’s five voucher programs.
Three-quarters of voucher recipients participate in EdChoice, which provides state funds to cover all or part of private school tuition for eligible students. Since the 2006–07 school year, the original EdChoice program—known as “performance-based EdChoice”—has provided scholarships to families whose children attend traditional public schools that the state deems to be poorly performing. And, since 2013–14, the EdChoice expansion program—known as “income-based EdChoice”—has provided scholarships to low-income families whose children do not qualify for the performance-based program. Today, the performance-based and income-based EdChoice programs account for approximately 50 percent and 25 percent, respectively, of total voucher participation in Ohio.[12]
Although there is significant public support for publicly funded vouchers,[13] rigorous evidence of their impact is mixed. Evaluations of the early implementation of urban voucher programs in Milwaukee and Washington, D.C., indicate small positive effects on program participants’ achievement (i.e., test scores) and attainment (i.e., level of education, such as graduation from high school or college attendance).[14] On the other hand, more recent evaluations of expanded voucher programs in Indiana, Louisiana, Ohio, and Washington, D.C., indicate significant negative impacts on participants’ test scores.[15] Beyond academic impacts on program participants, studies almost always indicate that parents who use vouchers report greater satisfaction with the quality and safety of their children’s schools.[16]
Criticisms of vouchers often center on their purported negative effect on traditional public schools. One contention is that vouchers siphon from school districts relatively high-achieving students whose families are more affluent and motivated, leaving behind concentrations of relatively disadvantaged, low-achieving, or minority students in traditional public schools. A related argument is that, because districts lose the state per-pupil funding tied to students who leave public schools with the help of vouchers, they have fewer resources to educate students who remain. Critics contend that these dynamics, in turn, lower the educational quality of public schools. The available research, however, suggests the opposite is true. There is rigorous evidence that voucher programs in Milwaukee and Florida led to improvements in the achievement of students who remained in traditional public schools, apparently because the competition that comes with the availability of vouchers incentivized traditional public schools to improve their performance.[17] An earlier Fordham-sponsored study also provides rigorous evidence that Ohio’s performance-based EdChoice program had no negative impact—and perhaps a positive impact—on the achievement of district students whose schools faced competitive pressure beginning in 2007–08.[18]
Although the Fordham study indicates that the performance-based EdChoice program did not harm the district schools that faced such competitive pressure during the program’s initial years, we do not know the district-wide and long-term impacts of the program on student achievement. Similarly, although the earlier study reveals that, among eligible students enrolled in district schools, participation rates differed substantially by family income levels and prior pupil achievement (but not by race), we do not know the impacts on district-wide enrollments and student segregation. As we discuss below, voucher programs may have impacts on district enrollments and finances that are more pronounced than one would expect based on the number and characteristics of voucher users. One reason is that districts’ enrollments and ability to raise local revenue via taxes are driven in part by their desirability as a place to live—something which voucher programs can affect in a variety of ways. To our knowledge, however, no rigorous evaluation of voucher programs—in Ohio or elsewhere—has estimated comprehensive, district-wide impacts of publicly funded vouchers on public school districts’ enrollments, finances, and educational outcomes.
The purpose of the present study is to estimate the effects of Ohio’s EdChoice programs on district enrollments, finances, and educational outcomes using a rigorous quasi-experimental approach and the best data available. We begin with a description and analysis of the performance-based EdChoice program—the larger of the two programs and the one for which our research design generates plausibly causal impact estimates. Then we describe and provide suggestive evidence of the impact of the income-based program. We conclude with key takeaways.
Performance-based EdChoice
The Ohio General Assembly enacted the performance-based EdChoice program in June 2005. At the start, it made available scholarships (vouchers) of up to $4,250 for elementary school students and $5,000 for high school students, provided that they had been assigned to schools considered poorly performing and did not reside in the Cleveland Metropolitan School District (whose students were already eligible for state-funded vouchers). Ohio law initially defined poorly performing schools as those with an “academic emergency” rating, Ohio’s lowest mark, for three consecutive years. Lawmakers relaxed those criteria in March 2006 and again in December 2006. These changes extended eligibility to students enrolled in district schools receiving “academic watch” ratings—the state’s second lowest—or worse in two of the previous three school years.
Although Ohio’s school-rating methodology and the districts with eligible schools (and thus eligible students) would change somewhat in subsequent years, the number of districts with EdChoice-eligible schools remained fairly stable from 2006–07 to 2018–19 (the last year of our analysis). Specifically, as Table 1 indicates, in each of those years about thirty districts had an average of seven to eight schools on the EdChoice eligibility list. But this apparent stability masks turnover in which districts were exposed to the program, as some districts with few schools on the list—and, thus, few eligible students—rolled on and off the list from year to year. Indeed, by the 2015–16 school year, forty-seven districts had had at least one school on the list for at least one year. The growing pool of EdChoice-exposed districts led program participation to rise from 3,071 students in the program’s first year of operation (2006–07) to 23,482 in 2018–19.
Table 1. Districts with eligible schools and performance-based EdChoice participation
The performance-based EdChoice program’s focus on the state’s lowest-rated schools meant that districts exposed to the program served largely disadvantaged students. Table 2, below, compares the forty-seven districts exposed to the program (at any time from 2006–07 through 2018–19) to the average Ohio district. Just prior to the program’s start, during the 2004–05 school year, students in districts that became exposed to EdChoice had been disproportionately economically disadvantaged (53 percent of district students qualified for free or reduced-price lunches (FRL) versus 23 percent in districts without EdChoice-eligible buildings); Black (34 percent versus 3 percent); and Hispanic (3 versus 1 percent).[19] Predictably, exposed districts were also the lowest performing. The average student in an EdChoice-exposed district was in a district that was over two standard deviations below the statewide average (approximately the second percentile among all Ohio districts) on the Ohio performance index, a composite measure that aggregates student performance on state tests across all subjects and grades.
Table 2 also reveals that districts exposed to the performance-based EdChoice program between 2006–07 and 2018–19 were large, with average enrollments in 2004–05 of almost 8,000 students (versus 2,500 in other districts), and that they spent more per pupil ($14,500 in 2019 dollars versus $11,500) despite performing worse academically.[20] They also had a higher concentration of private schools prior to the program’s implementation: approximately twelve private schools within five miles of the average district school, as opposed to four in districts not exposed to EdChoice. Thus, when about one-third of students in the average EdChoice-exposed district suddenly qualified for the program, there were likely private schools for them to attend. Indeed, in supplementary analyses, we found that the count of private schools near district schools prior to the program’s implementation explained about 80 percent of program participation in 2019. In other words, the significant presence of private schools in these districts before the performance-based EdChoice program’s start made it highly likely that these districts would immediately feel competitive pressure once one or more of their schools made it onto the list.
Table 2. Comparing districts exposed to performance-based EdChoice to those not exposed
Analytic strategy
To estimate the causal impact of performance-based EdChoice, it is tempting to compare the enrollments, finances, and academic outcomes of districts that were exposed to EdChoice to districts never exposed to the program. Unfortunately, such a comparison can be misleading, as these districts differed in a number of ways. Indeed, as we show in Appendix A, districts exposed to EdChoice—compared to districts that were never exposed—were experiencing relative declines in enrollments, relative increases in the share and isolation of Black and Hispanic students, relative increases in expenditures, and relative declines in student achievement. A naïve comparison of EdChoice and non-EdChoice districts, therefore, might falsely attribute to the program the continuation of such differential trends after the program’s introduction, when in fact they were already occurring before EdChoice commenced.[21]
To address this problem, our analysis focuses on the forty-seven districts that had at least one building on the EdChoice list between the 2006–07 and 2018–19 school years. These districts are more similar to one another—after all, they all had at least one school that rated sufficiently poorly to qualify for the program. It is more plausible, therefore, that these districts would have had comparable trends in outcomes in the absence of the program. Indeed, as we show in Appendix A, these districts generally had comparable trends in enrollments, finances, and achievement in the years leading up to the program—regardless of the number of district students who qualified for EdChoice—which gives us some confidence that trends would have continued to be similar in the absence of the program.
Specifically, we estimate the impact of performance-based EdChoice by comparing changes in district outcomes—changes from before the program was in place to the 2018–19 school year—between districts that had higher as opposed to lower levels of exposure to the program. Among the forty-seven districts that had schools on the EdChoice list, the average district had 29.54 percent of its students enrolled in eligible schools during its first year on the list. Initial exposure ranged from 2.6 percent of a district’s students to 100 percent. Districts with low initial exposure sometimes exited the list in subsequent years, whereas those with a higher percentage of eligible students tended to stay on across all years. Thus, these forty-seven districts capture a wide range of exposure to the program.
Our measure of exposure is based on initial program exposure because enrollment counts after 2005–06 are themselves influenced by the program. For example, if students exit schools eligible for EdChoice, future program exposure would appear lower than it is. Thus, for the thirty-three districts with students participating in the program during the 2006–07 school year (the first year of the program), our measure of program exposure for all years is based on the enrollment data from the 2005–06 school year. We do the same for districts that were first exposed in subsequent years. For example, if a district first had participating students during the 2014–15 school year, we calculated exposure based on enrollment data during the 2013–14 school year.[22]
This method effectively entails calculating changes in outcomes between the 2004–05 and 2018–19 school years for each of the forty-seven districts exposed to the program, then comparing those changes in outcomes between districts with different levels of exposure to the program (based on the exposure measure we describe above).[23] We use the 2004–05 school year as the baseline because it predates the enactment of the program in June 2005. We focus on effects through 2018–19 because it was the last school year for which we had values for all enrollment, finance, and achievement outcomes. The 2018–19 school year also turned out to be a good terminal year because of significant changes in EdChoice exposure in 2020, as well as the pandemic’s multiple effects on district enrollments, finances, and testing.
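In regression terms, this amounts to a long-difference, dose-response design: the 2004–05 to 2018–19 change in each district outcome is related to the district’s initial exposure share. Below is a minimal sketch of that comparison in Python; the variable names and the log specification are illustrative assumptions, not the report’s exact model:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# One row per EdChoice-exposed district (n = 47). Hypothetical columns:
# 'exposure' -- percent of students in EdChoice-eligible schools, measured
#               the year *before* the district's first exposure (see above);
# 'enroll_2005', 'enroll_2019' -- baseline and endline enrollment.
districts = pd.read_csv("edchoice_districts.csv")

# Long difference in log enrollment, so the slope reads approximately as
# the percent change per percentage point of initial exposure.
districts["d_log_enroll"] = np.log(districts["enroll_2019"]) - np.log(
    districts["enroll_2005"]
)

# Dose-response comparison across higher- versus lower-exposure districts.
model = smf.ols("d_log_enroll ~ exposure", data=districts).fit(cov_type="HC1")
print(model.params["exposure"])  # effect per percentage point of exposure
```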
The raw estimates from our statistical models indicate how a one-percentage-point increase in exposure to EdChoice affected outcomes up to thirteen years later, depending on when a district was first exposed. For ease of interpretation, the figures below report effects as percent changes in an outcome (compared to the 2004–05 baseline) for the average district with an EdChoice-eligible school. We encourage readers to consult Appendix A to review the raw statistical results as well as the results of validity and sensitivity tests. Notably, as we mention above, these tests reveal minimal differences in trends in the years leading up to the program (1998–99 through 2004–05) between districts that would ultimately have high as opposed to low levels of exposure to the program after 2005. It is these additional analyses that lead us to treat our estimates as capturing the plausibly causal impacts of district exposure to the performance-based EdChoice program.
In the following subsections, we review our primary estimates of the impact of performance-based EdChoice on district enrollments, racial and ethnic segregation, finances, and achievement. The enrollment, spending, and revenue data are from the websites of the National Center for Education Statistics and the Urban Institute. The tax and property valuation data are from the Ohio Department of Taxation website and the referendum data are from the lead author.[24] Financial variables are expressed in inflation-adjusted 2019 dollars. The performance index since 2005–06 is from the Ohio Department of Education’s website. The performance index from earlier school years comes from archived records related to the No Child Left Behind accountability system.
Enrollments
The performance-based EdChoice program could have affected district enrollments in multiple ways. The most direct way was through program participation, as any district student who used a voucher to attend a private school was no longer enrolled in that district. If that were the sole mechanism, district enrollments would have declined by one student for every voucher participant who resided in that district. But the program could also have affected district enrollments in other ways. For example, if publicizing the list of schools eligible for EdChoice led families to perceive district schools as poorly performing, then they may have taken advantage of other schooling options (e.g., charter schools) or moved to another district. Indeed, families may have reacted this way even if their children were attending district schools that were not on the list, and families living outside the district may have been less likely to move to the district. On the other hand, if EdChoice led to improvements in the performance of district schools (e.g., through competitive pressures), then families might have become attracted to the district and enrollments might have increased. Our research design should capture all such downstream consequences.
Indeed, the results of our analysis indicate that performance-based EdChoice affected district enrollments more than actual program participation counts suggest. As Figure 1 (below) shows, the average district with EdChoice-eligible schools between 2006–07 and 2018–19 experienced a relative decline in enrollment of 14 percent as a result of the program. As we show in the appendixes, the exact estimate is somewhat sensitive to model specification, but it is fair to say that the average district participating in the performance-based EdChoice program had approximately 10–15 percent fewer students in 2018–19 than it would have had in the absence of the program. Across all EdChoice-exposed districts statewide, that estimate implies a relative decline of approximately 35,000 to 55,000 students. Actual program participation in 2018–19 was 23,482 (see Table 1). Thus, even the lower end of our estimates implies that districts with EdChoice-eligible buildings experienced relative declines that exceeded what one would have expected based on actual program participation.
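A back-of-envelope comparison, using only the figures cited in this paragraph, makes the size of that gap plain:

```python
# Back-of-envelope check using the numbers cited above (Table 1 and our
# 10-15 percent range); illustrative arithmetic only.
participants_2019 = 23_482                      # actual EdChoice participants
decline_low, decline_high = 35_000, 55_000      # implied enrollment declines
print(round(decline_low / participants_2019, 2))   # ~1.49x actual participation
print(round(decline_high / participants_2019, 2))  # ~2.34x actual participation
```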
Figure 1 also indicates that these relative declines in enrollment are driven primarily by declines in the enrollment of Black students. We estimate that EdChoice led to a relative decline in district-enrolled Black students of approximately 20 percent. For the average EdChoice district, that equates to the share of Black students enrolled in its schools shrinking from approximately 34 percent to approximately 27 percent. It is understandable that Black students would take greater advantage of vouchers, as EdChoice-eligible schools served mostly Black students. But these results are nonetheless inconsistent with what one would have expected based on actual program participation. Though our results indicate that nearly 100 percent of relative declines in district enrollment are driven by Black students, 43 percent of official program participants in 2019 were Black, whereas 35 percent were White.
One possibility is that White families who participated in the program would have sent their children to private schools even in the absence of the program. In other words, data on voucher recipients might include White students who would never have attended district schools, so their participation in the program would not have affected the counts of White students in the district (as per the results in Figure 1). Indeed, these students’ families might have moved to the district, or stayed in it, because vouchers were an option. Black EdChoice participants, on the other hand, may have been students who would otherwise have attended district schools, such that their participation did reduce district enrollments (as per the results in Figure 1). And, as we note above, a signal of poor district performance (the EdChoice school-eligibility list) might have led families of Black students to send their children elsewhere (such as charter schools), producing an even more pronounced decline in the share of district students who are Black than program participation implies. In short, one way to reconcile the demographics of EdChoice participants with changes in the racial and ethnic composition of district students is that White participants were less likely to attend district schools in the first place (these students were never districts’ to lose), whereas Black students were more likely to exit district schools for other educational options, such as private and charter schools.
Figure 1. Percent changes in district enrollments by student race
It is important to note that the estimates for White and Hispanic students are not statistically significant; moreover, because these estimates are imprecise, we cannot rule out substantively significant increases or declines in those groups’ enrollments. Similarly, we cannot rule out that relative declines in enrollments match participation counts exactly. That is because statistical estimates come with random error (akin to the margins of error in public opinion polls). What we can say is that the estimated enrollment declines—overall and for Black students in particular—are unlikely to be due to chance. We can also say that our best estimate is that the share of Black district students was lower (and the share of White district students higher) than one would have expected based on the demographics of program participants.
Segregation
According to 2018–19 ODE data, students participating in the performance-based EdChoice program were 43 percent Black, 35 percent White, 12 percent Hispanic, 2 percent Asian American, and 1 percent American Indian or Alaskan Native. Thus, statewide, approximately 56 percent of voucher participants were Black, Hispanic, or American Indian or Alaskan Native, which are the minority racial and ethnic groups (“minority students” from now on) on which researchers focus when assessing the extent to which schools are segregated. Put differently, these data indicate that private school vouchers disproportionately go to minority students. However, as we discuss above, such data on voucher participants do not provide a good approximation of how performance-based EdChoice affected district enrollments—let alone how students were distributed across schools in those districts. The purpose of this analysis is to provide a valid estimate of performance-based EdChoice’s impact on the segregation of minority students in district schools.
We estimated the impact of performance-based EdChoice on racial and ethnic segregation among district schools using the same research design we used for enrollments, this time comparing changes in segregation (from 2004–05 to 2018–19) between districts with different levels of initial exposure to the program. Another difference between this analysis and the enrollment analysis is that we now weight statistical models by the number of students in a district, as that allows us to characterize the average level of segregation students experienced across the forty-seven EdChoice-eligible school districts.[25] The intuition is that the extent to which a given district’s segregation figures into our estimates should be proportional to the number of students experiencing that segregation.[26]
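As a concrete illustration of the weighted comparison, consider the following minimal sketch using statsmodels, with hypothetical column names ("exposure_pct", "seg_change", "enrollment") standing in for the variables described above (the segregation measure itself is defined just below):

```python
# A minimal sketch of the weighted comparison described above, using
# weighted least squares. Column names are hypothetical stand-ins.
import statsmodels.api as sm

def weighted_effect(df):
    """Regress the 2004-05 to 2018-19 change in a district's segregation
    measure on initial EdChoice exposure, weighting by (district-by-grade)
    enrollment so that larger districts count proportionally more."""
    X = sm.add_constant(df[["exposure_pct"]])
    model = sm.WLS(df["seg_change"], X, weights=df["enrollment"])
    return model.fit(cov_type="HC1")  # heteroskedasticity-robust errors
```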
There are commonly accepted ways of measuring student segregation across district schools.[27] To simplify the exposition, we focus on one of these measures: the isolation of minority students in particular schools. Specifically, our segregation measure captures the share of minority students in the average minority student’s school.[28] Another way to think about it is that the measure captures the probability that a minority student will be exposed to other minority students, as opposed to nonminority students. In the appendix, we also present the results using other measures of segregation prominent in the academic literature. We describe those measures in Table C1 of Appendix C. As we show in Figure C1 in that appendix, those other measures yield similar estimates, which further validates our main findings and empirical design.[29]
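In standard notation (ours, for exposition; the exact measures appear in Table C1 of Appendix C), the isolation of minority students in district \( d \) is

\[
I_d \;=\; \sum_{s \in d} \frac{m_s}{M_d} \cdot \frac{m_s}{t_s},
\]

where \( m_s \) is the number of minority students in school \( s \), \( t_s \) is the school's total enrollment, and \( M_d = \sum_{s \in d} m_s \). The first factor weights each school by the share of the district's minority students it enrolls; the second is the school's minority share. The weighted sum is thus the minority share in the average minority student's school.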
Before we turn to EdChoice’s impact on minority student isolation, Figure 2 (below) reveals that the average student whose district was exposed to the program experienced a relative decline of approximately 13 percent in the districtwide share of students who identify as a racial or ethnic minority (Black, Hispanic, or Native American). During the 2004–05 school year, the average student was in a district in which 49 percent of students identified as a minority (see Table C1 in Appendix C).[30] A 13 percent decline is equivalent to going from 49 percent to 43 percent. Thus, consistent with our enrollment results above, for the average student in our sample, EdChoice led to a decline of six percentage points in the share of minority students districtwide.
Turning to our measure of school-level district segregation, Figure 2 indicates that EdChoice led to a decline of 12 percent in the isolation of minority students. During the 2004–05 school year, the average minority student was in a school in which 57 percent of students identified as a minority (see Table C1 in Appendix C). A 12 percent decline is equivalent to going from 57 percent to 50 percent. Thus, it appears that the relative reduction in the district-level share of minority students indeed led to a comparable reduction in the school-level isolation of minority students. Put differently, the estimates indicate that because of EdChoice, Black, Hispanic, and Native American students went to schools with a greater share of White and Asian American students than they would have in the absence of EdChoice. As we note above and show in Appendix C, other measures of segregation yield similar effect sizes.[31]
Figure 2. Percent change in the share and isolation of minority students
Finances
The EdChoice program is funded by the state, and state per-pupil funding is what districts stand to lose when a student exits their schools to participate in the voucher program. But districts with EdChoice students retain all the revenue they generate locally via property and other taxes.[32] As Table 2 shows, as of the 2004–05 school year, local revenue accounted for approximately 40 percent of total revenue among districts exposed to performance-based EdChoice between 2006–07 and 2018–19. Thus, simple logic suggests that an enrollment decline due to program participation would mechanically increase per-pupil district revenue, since districts lose only the state share for each departing student. Although that intuition ultimately proves useful, the underlying reality is more complicated.
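Before turning to the complications, stylized arithmetic illustrates the mechanical part of that intuition (the numbers are invented for illustration):

```python
# Invented numbers: a district with fixed local tax revenue loses 10
# percent of its students to vouchers but keeps all local revenue.
students_before = 1_000
students_after = 900                    # 10 percent enrollment decline
local_revenue = 4_000_000               # fixed local revenue (~40% of total)
print(local_revenue / students_before)  # 4000.0 per pupil before
print(local_revenue / students_after)   # ~4444.4 per pupil after, a ~11% rise
```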
One reason this intuition is not quite right is that the direct cost to individual districts of having students participate in the program depended on how much state money they received under complicated state funding formulas that changed frequently between 2006–07 and 2018–19.[33] Another reason is that, as we show in the enrollment analysis, the relative enrollment declines associated with the program appear to be greater than program participation numbers alone would imply. Finally, a simple accounting of voucher costs and total enrollment effects (like those we document above) does not account for the EdChoice program’s impact on districts’ ability to bring in local revenue. For example, what if the annual release of the list of EdChoice-eligible schools reminded district residents of the poor performance of their schools and led them to vote down district tax referenda? What if the availability of vouchers meant that districts attracted more affluent families who pay higher taxes? Once again, the possibilities are many, and our research design is meant to take into account the totality of the performance-based EdChoice program’s impact on district finances.
Nonetheless, the results in Figure 3 (below) tell a basic overall story that is remarkably consistent with intuition regarding EdChoice’s impact on district finances. The figure reveals that, for the average district exposed to performance-based EdChoice, the program led to a decline of approximately 13 percent in total expenditures—expenditures not reported on a per-pupil basis. However, on a per-pupil basis, we estimate that performance-based EdChoice led to an increase in total expenditures per pupil of 1.39 percent and an increase in operating (noncapital) expenditures of 4.55 percent per pupil. These last two results do not attain conventional levels of statistical significance—though the estimate for operating expenditures comes close in some specifications—but they do enable us to essentially rule out a long-term decline in per-pupil expenditures. The estimates in the appendix also reveal statistically significant increases in current expenditures per pupil in the short term, just a few years after the program’s introduction (see Figure A4 in Appendix A). This finding may be due to budgetary practices during initial implementation that were meant to compensate districts for potential disruptions as they adjusted to lower enrollments.[34]
Figure 3. Percent changes in district expenditures and local revenue
Figure 3 also indicates a statistically significant 12 percent increase in local revenue per pupil. In other words, the voucher-related decline in enrollment led to a proportional increase in locally generated district revenue per pupil. That suggests that there was no decrease in EdChoice-exposed districts’ ability to generate local revenue. As we show in additional analyses in the appendixes, there are no statistically significant estimated impacts on district property values, local property tax rates, local property tax revenues, levy passage rates, or state-defined fiscal stress designations (see Table A4 in Appendix A). If anything, the coefficients—though not statistically significant—point toward districts having higher property values, a higher referendum passage rate, and a lower probability of receiving a fiscal stress designation. Thus, the intuition works: declines in enrollments led to increases in revenue per pupil because EdChoice had no negative impact on districts’ ability to raise local revenue.
Achievement
Publicly available data enable us to estimate the impact of performance-based EdChoice on the academic achievement of district students. Specifically, we use Ohio’s “district performance index,” which combines into a single measure the proficiency levels of district students across all subjects and tested grades. Because the tests that factor into the index changed significantly over time, we standardized the measure by year so that it captures how many standard deviations an EdChoice district is from the statewide district average in a given year. Finally, as in the segregation analysis, we weighted the statistical models so that we capture effects for the average student in a district exposed to performance-based EdChoice.
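For concreteness, the year-by-year standardization can be expressed in a few lines; here is a minimal sketch, assuming a hypothetical district-by-year table with columns "year" and "perf_index":

```python
import pandas as pd

def standardize_by_year(df: pd.DataFrame) -> pd.Series:
    """Express each district's performance index as standard deviations
    from the statewide district mean for that year, so that scores are
    comparable even though the underlying tests changed over time."""
    by_year = df.groupby("year")["perf_index"]
    return (df["perf_index"] - by_year.transform("mean")) / by_year.transform("std")
```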
Figure 4. Impact on student achievement (district-level standard deviations)
Figure 4 reveals that exposure to EdChoice led to increases in academic achievement in the average student’s district. Specifically, the average student in an EdChoice district experienced an increase in districtwide achievement of half of a standard deviation. That means that the academic achievement of their district went from the second percentile (approximately the twelfth-lowest-achieving Ohio district) to the sixth percentile (approximately the thirty-seventh-lowest-achieving Ohio district).[35] This effect is consistent with the program drawing away disproportionately economically disadvantaged students, who tend to be low achieving, thus increasing average district proficiency rates.[36] The results are also consistent with a rigorous study indicating that the competitive pressures of Ohio’s performance-based EdChoice had a null or positive impact on Ohio’s traditional public schools in the program’s early years.[37]
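As a rough sanity check on that percentile arithmetic (rough because the actual rankings come from the empirical distribution of Ohio district performance indexes, not a normal curve), one can verify that a 0.5 standard deviation gain moves a district from about the 2nd to about the 6th percentile under normality:

```python
# Normality-based check of the percentile movement described above;
# illustrative only, since actual district ranks require the empirical data.
from scipy.stats import norm

z_baseline = norm.ppf(0.02)   # ~ -2.05 SD corresponds to the 2nd percentile
z_after = z_baseline + 0.5    # apply the estimated 0.5 SD achievement gain
print(round(norm.cdf(z_after), 3))  # ~ 0.06, i.e., about the 6th percentile
```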
Unfortunately, we are unable to determine how much the positive effect is due to student learning gains as opposed to changes in student composition, as Ohio’s district value-added estimates are unavailable for our baseline years and we were unable to obtain student-level data from ODE in time for this report. It is also important to emphasize that, despite the positive achievement effects, districts exposed to EdChoice remained exceptionally low achieving. Thus, these results do not suggest that EdChoice is a particularly effective intervention for improving the performance of traditional public schools.
Income-based EdChoice
The Ohio General Assembly expanded EdChoice in 2013 when it enacted the income-based program. Beginning in 2013–14, scholarships became available to students whose families had incomes at or below 200 percent of the federal poverty line and who were not in schools served by the performance-based EdChoice program. The income-based program phased in such that Kindergarten students were eligible in that first year, first graders in 2014–15, second graders in 2015–16, and so on. Students whose family incomes rose above the 200 percent poverty threshold (up to 400 percent) could continue in the program, but scholarship amounts were reduced depending on how far family income rose above the threshold.[38] Based on available federal data, we estimate that at least 603 of the 609 districts in our dataset had students who qualified for the program.[39]
Table 3. Districts with eligible students and income-based EdChoice participation
Table 3 shows that very few students participated in the income-based EdChoice program during this period, given that there were eligible students in over 600 districts and that over 40 percent of those in qualifying grades were eligible on the basis of family income.[40] In Table 4, below, we compare the districts most and least exposed to the program by identifying the 562 districts that were not eligible for the performance-based EdChoice program (our primary analytic sample) and, among them, comparing those with shares of FRL-eligible students above and below the statewide median in 2012. This is our best approximation of relative program eligibility across districts because, in 2012, the FRL-eligibility criterion was that a student’s household be at or below 185 percent of the poverty threshold (slightly below the voucher eligibility income threshold).
Table 4 reveals that districts with higher exposure to income-based EdChoice—those with relatively more low-income pupils in 2012—had somewhat lower enrollments (2,075 versus 2,561 students), slightly lower spending per pupil ($12,028 versus $12,465), and substantially lower achievement levels (0.54 standard deviations below the state average on the performance index, as compared to 0.72 standard deviations above it). However, annual student learning gains, racial composition, and private school counts were very similar in both groups of districts. In other words, districts with more students eligible for income-based EdChoice were comparable to those with fewer eligible students when it came to district educational effectiveness, racial composition, and the extent to which the program was likely to impose competitive pressure.
Table 4. Comparing districts with high vs. low exposure to income-based EdChoice
Analytic strategy
Our empirical strategy parallels the strategy for the performance-based EdChoice program. It first involves determining districts’ exposure to the policy by calculating the percent of district students who qualified for the scholarship just prior to the program’s start. Doing so is more complicated in the case of the income-based program, however. First, students eligible for the performance-based EdChoice scholarship (those in low-performing schools) were not eligible for the income-based program. To address this, we estimated models that exclude the forty-seven districts exposed to performance-based EdChoice between 2007 and 2019.[41] A second challenge is that we have no data source that identifies the proportion of a district’s enrollment that was eligible for an income-based voucher. As we note above, we use as a proxy the percent of district students who qualified for FRL in 2012, which was based on whether a household was at or below 185 percent of the federal poverty threshold.[42] Thus, FRL counts from the 2012 school year should be a close approximation of the count of students near or below 200 percent of the poverty line. For our analytic sample of 562 districts, our estimate of the percent of district students eligible for income-based EdChoice (our measure of program exposure) ranges from 0 to 98 percent.
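In code, the proxy is a simple ratio; here is a minimal sketch with invented numbers and hypothetical column names:

```python
import pandas as pd

# Illustrative district-level table; the FRL and enrollment figures are
# made up, and the column names are hypothetical stand-ins.
districts = pd.DataFrame({
    "district": ["A", "B"],
    "frl_students_2012": [450, 120],
    "enrollment_2012": [1_000, 2_400],
})
# Exposure proxy: percent of 2012 enrollment that was FRL eligible.
districts["income_exposure_pct"] = (
    100 * districts["frl_students_2012"] / districts["enrollment_2012"]
)
print(districts)  # 45 percent for district A, 5 percent for district B
```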
We again emphasize that the measure of program exposure is based on district features prior to the program’s start. As we explain in our discussion of the performance-based EdChoice analysis, the purpose of using a preprogram measure of exposure is to capture the full breadth of the program’s impact. It is also important to note that although the income-based EdChoice program was phased in and a district’s true exposure was limited to students in eligible grades (e.g., only low-income Kindergarteners in 2013–14), the percent of FRL-eligible students in 2012 should still roughly capture the relative exposure of school districts to the income-based EdChoice program for every subsequent year. Thus, our empirical strategy is to compare changes in outcomes—between 2012–13 and 2018–19—between districts with different levels of exposure as measured by preprogram FRL eligibility.
Unfortunately, unlike our analysis of the performance-based EdChoice program, this design does not yield plausibly causal estimates of the program’s impact. In particular, as we show in the appendixes, districts with higher (as opposed to lower) FRL rates in 2012 were experiencing relative declines in enrollments, racial isolation of minority students, and total expenditures, as well as relative increases in student achievement, in the school years leading up to 2012–13 (see Figure B1 in Appendix B). Thus, unlike the performance-based EdChoice analysis, it seems unreasonable to claim that trends in outcomes between 2013–14 and 2018–19 would have been similar for districts with high and low exposure had the program not been in place. Other attempts to get at a credibly causal design—such as comparing observationally similar districts with higher and lower densities of private schools prior to the program—similarly failed basic validity checks.[43] Thus, although we can characterize the results below as capturing differences in trends between districts that had more as opposed to fewer eligible students, we cannot characterize these relative differences as resulting from the income-based EdChoice program.
Finally, let us note a couple of other limitations of this analysis. First, the exceedingly low participation rate for the average Ohio district (as implied by Table 3 above) makes it unlikely that we would have the statistical power to detect program effects even with a valid design.[44] Second, the time horizon is limited. The program was still being phased in as of 2018–19, when students in grades K–5 were eligible. That means the program’s impact on districts could not approach a relatively steady state by 2018–19, as the performance-based EdChoice program had. That the program was still phasing in also precludes our pooling outcomes for the 2016–17 through 2018–19 school years to enhance statistical power, as we did for the analysis of performance-based EdChoice. Thus, even if the analytic strategy we employ were to generate plausibly causal estimates of income-based EdChoice’s impact, it seems unlikely that those effects would be sufficiently large to be statistically distinguishable from null effects.
To reiterate, our analysis focuses on comparing changes in outcomes between the 2012–13 and 2018–19 school years, depending on a district’s approximate exposure to income-based EdChoice—which, as we note above, ranges from roughly 0 to 98 percent of district students being eligible for the program. Consequently, the raw estimates from our statistical models (which appear in Appendix B) indicate, roughly, how a one-percentage-point increase in exposure to income-based EdChoice corresponded to changes in outcomes by 2018–19. Once again, for ease of interpretation, three of the four figures below report estimates as percent changes (compared to 2012–13) for the average district.
Relative changes in outcomes associated with exposure to income-based EdChoice
Figure 5 (below) reports relative changes in outcomes for districts with average levels of exposure to income-based EdChoice. The outcome measures are the same as those in the analysis of performance-based EdChoice, except we now also have Ohio’s district “value-added” measure, which captures annual gains in student achievement on mathematics and English language arts exams. Additionally, because districts with higher (as opposed to lower) exposure to the program had significantly different trends in the years leading up to the program, the figures reflect whether estimates are sensitive to controlling for differences in those prior trends.[45] To convey that sensitivity, Figure 5 reports a solid bar only if an estimate is statistically significant and is not sensitive to controlling for prior trends.[46]
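One simple way to think about such a trend adjustment, sketched under the assumption of district-level linear preprogram trends (the estimated specifications appear in Appendix B), is to ask whether the observed change exceeds what each district's own preprogram trend would have predicted:

```python
# A stylized version of the prior-trend check: fit each district's
# preprogram linear trend and compare the 2018-19 outcome with the value
# that trend alone would predict. Column names are hypothetical.
import numpy as np
import pandas as pd

def detrended_change(df: pd.DataFrame, outcome: str) -> pd.Series:
    """Observed 2018-19 outcome minus the value extrapolated from each
    district's 2006-07 through 2012-13 linear trend. Assumes columns
    "district", "year" (end year of the school year), and the outcome."""
    out = {}
    for district, g in df.groupby("district"):
        pre = g[(g["year"] >= 2007) & (g["year"] <= 2013)]
        slope, intercept = np.polyfit(pre["year"], pre[outcome], deg=1)
        predicted_2019 = slope * 2019 + intercept
        observed_2019 = g.loc[g["year"] == 2019, outcome].iloc[0]
        out[district] = observed_2019 - predicted_2019
    return pd.Series(out)
```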
The results in Figure 5a reveal that districts with average exposure to the income-based EdChoice program experienced, on average, relative declines in enrollment of 5.54 percent (since 2012–13) and that those declines affected all racial and ethnic subgroups. However, when one accounts for differences in district-specific preprogram trends, one sees relative increases in enrollments overall and among Asian American and White students. Consequently, the only two estimates represented with solid bars are the enrollment declines among Black and Hispanic students, whose relative declines are not merely a continuation of trends from the years leading up to the program’s establishment in 2013. The relative decline of 4.25 percent in the share of minority students (reported in Figure 5b), although statistically insignificant, seems to confirm this result. Indeed, as we show in the appendixes, supplementary analyses suggest that increased exposure to income-based EdChoice is associated with a decline in the share of district students who are Black, Hispanic, or American Indian or Alaskan Native.
The school-level segregation estimates are perhaps the least informative of all. The main estimate of minority isolation indicates a relative decline in segregation, but, as we show in the appendixes, accounting for district-specific preprogram trends suggests a relative increase in the isolation of minority students across district schools. Consequently, to communicate the weakness of this finding, the estimate in Figure 5b is denoted with a star and the bar is empty. This instability and statistical insignificance, which characterize the results for all segregation measures (see, e.g., the results in Appendix B), are unsurprising. Both the share of district students who are Black, Hispanic, or Native American and the isolation of these students in district schools are very low in these 562 districts. For example, whereas the average student in a district exposed to performance-based EdChoice was in a district where 49 percent of students were racial or ethnic minorities, that figure was only 10 percent in the 562 districts in this analysis of income-based EdChoice. Thus, there is very little variation, and segregation measures might be problematic.[47]
The most robust results are those related to district expenditures and revenues (Figure 5c). Statistical models consistently indicate a positive association between our measure of exposure to income-based EdChoice and current expenditures per pupil (a relative increase of 3.54 percent) and local revenue per pupil (a relative increase of 5.62 percent).[48] If there is one takeaway from the income-based analysis on which we might put some weight, it is that there is no credible evidence of revenue or expenditure declines. All credible evidence points in the opposite direction.
Finally, estimates of district achievement levels and annual “value-added” achievement gains (Figure 5d) are statistically insignificant across all statistical models. Unfortunately, these estimates are too imprecise to rule out achievement effects that researchers would consider substantively meaningful. As we note above, the limited take-up of the income-based program and our inability to obtain student-level data prevent us from conducting an analysis sufficiently precise to reveal the income-based EdChoice program’s impact on student achievement.
Figure 5. Relative changes in enrollments, segregation, finances, and achievement
Conclusion
The analysis indicates that the performance-based EdChoice program led to lower levels of segregation among minority students, no change (and perhaps an increase) in district expenditures per pupil, no change in districts’ ability to generate local revenue (leading to an increase in local revenue per pupil), and higher academic achievement among the remaining district students. The analysis of the income-based EdChoice expansion program was mostly inconclusive. What one can say, however, is that an analysis of publicly available data yields no credible evidence that the income-based EdChoice expansion had a negative impact on district revenues, segregation, or achievement. In future work, we hope to use student-level data to get more credible impact estimates of the income-based program, as well as estimates of the performance-based program’s long-term impact on student learning.
Between 2006 and 2019, EdChoice-eligible schools did indeed experience increases in racial and ethnic segregation, as well as declines in achievement (see Figure D1 in Appendix D). These trends had begun well before 2006, however, and this study provides compelling evidence that they should not be attributed to private school vouchers. In fact, the analysis indicates that district segregation and achievement would have been worse in the absence of the performance-based EdChoice program.
Acknowledgments
We thank David Figlio, Brian Kisida, and Vlad Kogan for helpful reviews of an earlier draft. We also thank Joshua Sadvari at The Ohio State University library for helping us geolocate schools. Last but not least, we thank the Thomas B. Fordham Institute—particularly Aaron Churchill, Chad Aldis, Chester E. Finn, Jr., and Michael J. Petrilli—for making this project possible, for offering their expertise on Ohio education policies, and for helping us make this report intelligible.
– Stéphane Lavertu and John J. Gregg
We offer our deepest thanks to Stéphane Lavertu and John J. Gregg for their detailed, rigorous analysis of Ohio’s EdChoice voucher programs. On the Fordham team, we wish to thank Michael J. Petrilli and Chester E. Finn, Jr. for their thoughtful feedback during the drafting process; we also thank our Columbus colleague Jeff Murray for assisting with report dissemination. We express our gratitude to Dr. David Figlio of the University of Rochester and Dr. Brian Kisida of the University of Missouri for their feedback on the report. Last, special thanks to Pamela Tatz, who copyedited the manuscript, and Andy Kittles, who created the design.
– Aaron Churchill and Chad L. Aldis, Thomas B. Fordham Institute
[7] Information about the case—Columbus City School District, et al. v. State of Ohio—is available on the Franklin County court’s website.
[8] Eligibility for performance-based EdChoice hinges on the academic performance of the district school to which a student is assigned. In 2021–22, 36,824 students participated in this program.
[9] In 2021–22, 20,783 students participated in this program.
[10] Figlio and Karbownik, Evaluation of Ohio’s EdChoice Scholarship Program.
[11] EdChoice programs experienced large increases in participation rates between 2019 and 2021, though some of this is attributable to the expansion of eligibility in 2019. Evidence of parents’ exit from public schools—particularly when districts offered only remote instruction—is available here: Tareena Musaddiq, Kevin Stange, Andrew Bacher-Hicks, and Joshua Goodman, “The Pandemic’s Effect on Demand for Public Schools, Homeschooling, and Private Schools,” Journal of Public Economics 212, article 104710 (August 2022). Evidence that private schools were more likely to stay open during the pandemic is available here: Michael B. Henderson, Paul E. Peterson, and Martin R. West, “Pandemic Parent Survey Finds Perverse Pattern: Students Are More Likely to Be Attending School in Person Where Covid Is Spreading More Rapidly,” Education Next 21, no. 2 (Spring 2021).
[12] The remainder participate in the Jon Peterson Special Needs scholarship program, Autism Scholarship Program, and Cleveland Scholarship Program.
[13] According to Education Next’s 2022 public opinion poll, around 50 percent of both Democrats and Republicans support universal school vouchers, whereas about 40 percent oppose them. Majorities of Black respondents (72 percent) and Hispanic respondents (58 percent) support them, whereas 44 percent of White respondents support them. See David M. Houston, Paul E. Peterson, and Martin R. West, “Partisan Rifts Widen, Perceptions of School Quality Decline: Results of the 2022 Education Next Survey of Public Opinion,” Education Next 22, no. 4 (Fall 2022).
[15] See Atila Abdulkadiroğlu, Parag A. Pathak, and Christopher R. Walters, “Free to Choose: Can School Choice Reduce Student Achievement?” American Economic Journal: Applied Economics 10, no. 1 (January 2018): 175–206; R. Joseph Waddington and Mark Berends, “Impact of the Indiana Choice Scholarship Program: Achievement Effects for Students in Upper Elementary and Middle School,” Journal of Policy Analysis and Management 37, no. 4 (2018): 783–808; Figlio and Karbownik, Evaluation of Ohio’s EdChoice Scholarship Program; Mark Dynarski et al., Evaluation of the DC Opportunity Scholarship Program: Impacts After Two Years (Washington, D.C.: Institute of Education Sciences, National Center for Education Evaluation and Regional Assistance, U.S. Department of Education, NCEE 2018-4010, May 2018).
[16] These studies are of uneven quality, but the results are consistent. For example, the relatively rigorous Milwaukee and Washington, D.C., evaluations cited above both found that participating families reported increased satisfaction and perceived improvements in school safety.
[18] Figlio and Karbownik, Evaluation of Ohio’s EdChoice Scholarship Program.
[19] We provide statistics from the 2004–05 school year because that is the last year prior to the program’s implementation, which consisted of a release of EdChoice-eligible schools during the 2005–06 school year. The 2004–05 school year serves as the appropriate baseline for our analysis because statistics from those years are not contaminated by the impact of the program.
[20] This report emphasizes financial expenditures, as opposed to revenues, because Ohio district revenue data misleadingly include funds for charter schools. The revenue data we present are from locally generated revenue (i.e., revenues that districts raise by levying property, sales, or income taxes), as these revenues go solely to school districts in our analytic sample (which excludes the Cleveland school district).
[21] Additionally, because the high-poverty and low-achievement districts eligible for the program are so different in terms of observable characteristics, attempts to match EdChoice and non-EdChoice districts based on preprogram characteristics (such as spending, enrollments, charter school presence, achievement, sanctions from accountability systems, etc.) leads to very few well-matched districts.
[22] Program exposure for districts other than the original thirty-three could have been influenced by the program itself, so focusing on the thirty-three districts initially exposed yields the cleanest design. As we show in the appendix, however, the results are similar when we restrict the analysis to those thirty-three districts. Thus, for completeness, we present estimates based on all districts with EdChoice-eligible schools.
[23] In fact, we average outcomes across 2017–19 in order to maximize statistical precision. We chose those years because all forty-seven districts were exposed by the 2016 school year. In a sensitivity analysis, we estimated models using 2019 only as the dependent variable, and the results were qualitatively similar. That is not surprising, as effect sizes often plateaued just a few years after districts were initially exposed, and a large majority of districts were exposed in 2007–08. Again, as we show in the appendix, the results are similar if we focus on districts treated in 2007.
[25] Another difference is that we use grade-level measures of segregation for each district, as using grade-level data ensures that we are looking at the segregation of students in the same grade across different district schools. Using grade-level data does not affect the main results, but it enables us to conduct a robustness check in which we limit the analysis to grades served by multiple schools (as we show in the appendixes, our results are generally robust to limiting the sample in this way, which enhances confidence in our estimates). Thus, when we discuss weighting by district enrollments, we actually mean that the analysis weights our models using district-by-grade enrollments.
[26] Doing so also helps us address the fact that our estimated effect sizes for small districts will be artificially small in magnitude and estimated with more error because there are few students and few schools across which we can observe segregation. This issue is more relevant for the other measures of segregation that we analyze in the appendixes. As we show in the appendixes, the results are similar when we use other methods for addressing estimation problems related to there being few students and little between-school variation.
[28] Racial segregation is also possible within schools, but we lack data to explore such a possibility. Students may also sort based on income. Because EdChoice targets low-achieving students, it may reduce socioeconomic segregation. However, the FRL eligibility of students is not a good proxy for poverty after 2013, when Ohio began to pilot the FRL community eligibility option.
[29] The results help validate our general empirical design because these are measures of relative segregation, which effectively hold constant changes in districtwide demographics. If demographic trends differed between districts with high and low exposure to EdChoice (which would be a violation of the assumptions of our research design), then these measures of relative segregation would yield different estimates than the isolation measure on which we focus.
[30] Note that 49 percent minority students is a greater share than the approximately 39 percent reported in Table 2. That is because the numbers in Table 2 capture the average share of minority students across EdChoice districts, no matter their size, whereas 49 percent captures the share in the average student’s district. In other words, EdChoice districts with larger enrollments tend to have a greater share of minority students.
[31] Estimates based on other measures do not attain conventional levels of statistical significance. However, that may be expected, as the calculations involve accounting for (noisy) calculations of White and Asian students’ relative exposure to minority students. That the estimates yield comparable effect sizes increases confidence in our main estimates based on less complicated calculations.
[32] Cleveland Metro is the only exception, as it shares some locally generated revenue with public charter schools, but Cleveland is not in our analytic sample because it has its own voucher program.
[34] For example, see Stewart and Moon, “Understanding How School Vouchers Are Funded.”
[35] Although 0.5 standard deviations might seem like a lot, EdChoice districts had such low-achieving students—that is, they were so far into the left tail of the achievement distribution—that the change in achievement translates to an apparently small change in a district’s percentile ranking.
[36] Figure A2 in Appendix A reveals that exposure to performance-based EdChoice indeed resulted in a decline in students who qualified for FRL until 2012—the last year before Ohio piloted the community eligibility option that rendered FRL status an invalid measure of economic disadvantage.
[37] Figlio and Karbownik, Evaluation of Ohio’s EdChoice Scholarship Program.
[38] Later, after the period of this analysis, Ohio lawmakers increased the eligibility threshold to 250 percent of the federal poverty line. Lawmakers also recently removed the income cap.
[39] We based this rough estimate on the rate of 2012 FRL eligibility across Ohio districts.
[40] It appears that program participation did not fully exhaust appropriation amounts set by the Ohio General Assembly, at least in early years. See Stewart and Moon, “Understanding How School Vouchers Are Funded.”
[41] Results that include all Ohio districts are included throughout Appendix B.
[42] We use 2012 FRL data because Ohio did not pilot the FRL community option until 2013.
[43] For example, see Figlio and Hart, “Competitive Effects of Means-Tested School Vouchers.”
[44] This problem is particularly acute for the analysis of achievement, as many voucher-eligible students would not have scores until third grade—which means their scores would affect our achievement measure beginning in 2016–17.
[45] Specifically, as we did for the performance-based program, we report in the appendixes the results of models that control for district-specific time trends (from 2006–07 through 2012–13). Whereas estimates for the performance-based program were generally insensitive to accounting for differences in preprogram trends, estimates for the income-based program were often highly sensitive to such controls. As we note in our discussion above, the enrollment estimates for the performance-based program were somewhat sensitive to model specification (possibly because of the small sample size), but those coefficients still suggested qualitatively similar effects. That is not the case for the analysis of the income-based program.
[46] Specifically, a solid bar indicates the estimate is statistically significant and that controlling for district-specific trends yields a coefficient with the same sign. Statistically significant estimates for which the district-trend model had an opposite sign are reported as an empty bar and denoted with a star.
[48] Further analysis suggests that this may be due in part to slight increases in local property tax rates, though that result is only marginally statistically significant (see Table B4 in Appendix B).