In the 2017–18 school year, 7,344 North Carolina K–12 students received a state-funded voucher to attend private school through the state’s $28 million Opportunity Scholarship program. Enacted in 2013 and launched in 2014, the program will enter its fifth year of operation in the fall of 2018, yet little is known about how it has affected participating students’ math and reading achievement. Such an evidence gap is unusual for a school choice program of this scale.

The legislation that created this program—North Carolina General Statute 115C-562—calls for an evaluation of the "learning gains or losses of students receiving scholarship grants," as compared to similar students in public schools, but there has been no official state-supported evaluation conducted to date. There are at least three major barriers to conducting a rigorous evaluation.

First, there is an outcome-data barrier. The accountability provisions built into the program require participating private schools to administer an annual assessment to voucher-receiving students and to share student test scores with the state. At first glance, this provision sounds promising, as it could allow for the collection of data that researchers might leverage to learn more about the program's academic impact. Unfortunately, private schools are permitted to administer any nationally norm-referenced standardized test of their choosing, whereas comparable public school students take the state's criterion-referenced tests—the End of Grade and End of Course examinations. As a result, there is no common metric on which to compare student outcomes, either within the private school sector or between the private and public school sectors.

The second barrier relates to participation. North Carolina’s program does not require participating private schools to cooperate with an evaluation. Researchers must instead secure the cooperation of school leaders one school at a time. The same recruitment challenge arises at the student level: Opportunity Scholarship recipients are not required to participate in any new data collection efforts by a research team attempting to evaluate the program, so researchers must also negotiate participation one student at a time, ruling out any hope of assembling a representative evaluation sample of voucher users across the state.

Third, there is a funding barrier. The state hasn’t provided a specific evaluation budget. Instead, funding provided to the State Education Assistance Authority for administration of the program must be used to maximize the number of students who receive vouchers, cover the costs of running a rapidly expanding program, and fund an evaluation. If a rigorous, state-mandated evaluation of program impact is to be conducted, then the state will need to provide adequate support for that evaluation.

In the spring of 2017, we, along with our NC State University colleague Stephen Porter, coordinated with a diverse set of public and private school partners to conduct a pilot evaluation demonstrating the best evaluation currently possible given these significant barriers. We recruited 698 student volunteers to take a common third-party assessment. We visited thirty-eight public and private schools to collect student test score data, then merged these records with demographic information and prior testing files from the North Carolina Department of Public Instruction to conduct a quasi-experimental "matching" analysis. This strategy allows us to compare the performance of voucher-receiving students to that of statistically similar students in public schools in the same regions of the state. Our analysis finds large, statistically significant academic gains for first-time voucher recipients in mathematics and language. Our matching strategy mitigates many sources of potential bias, but our findings should be interpreted with caution: because the private and public school students in this sample were not randomly drawn from the pool of all eligible students statewide, these impacts may not be representative of the experience of the average scholarship user, which limits the generalizability of the results.
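For readers unfamiliar with matching designs, the core comparison can be sketched in a few lines of Python. This is a deliberately simplified toy illustration, not the authors' actual model: a real analysis would match on many covariates (demographics plus prior test scores) and use more sophisticated estimators, and every name and number below is invented for demonstration.

```python
# Toy illustration of one-to-one nearest-neighbor matching (NOT the
# authors' actual method): pair each voucher student with the most
# similar public-school student on a prior test score, then compare
# outcomes across the matched pairs. All data here are hypothetical.

def match_and_compare(voucher, public):
    """Pair each voucher record with the nearest public-school record
    by prior score; return the mean matched outcome difference."""
    diffs = []
    for v in voucher:
        # nearest neighbor on the matching covariate (prior score)
        m = min(public, key=lambda p: abs(p["prior"] - v["prior"]))
        diffs.append(v["outcome"] - m["outcome"])
    return sum(diffs) / len(diffs)

# Hypothetical records: prior-year score and current-year outcome.
voucher_students = [
    {"prior": 48, "outcome": 55},
    {"prior": 52, "outcome": 60},
]
public_students = [
    {"prior": 47, "outcome": 50},
    {"prior": 53, "outcome": 54},
    {"prior": 70, "outcome": 72},
]

gap = match_and_compare(voucher_students, public_students)
print(gap)  # mean difference across matched pairs
```

The point of the design is visible even in this sketch: the high-scoring public-school student (prior score 70) never enters the comparison, because no voucher student resembles that record, so the estimated gap reflects only students who look alike on the matching covariate.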

If the intent of the North Carolina Opportunity Scholarship program is primarily to expand school choice options, then that goal has been accomplished, as enrollment has risen steadily from 1,216 students in 2014–15 to 7,344 in 2017–18. But if that goal includes the improvement of academic outcomes for participating students, then impacts have to be measured, which requires removing the barriers that prevent researchers from conducting a high-quality and comprehensive program evaluation.

The views expressed herein represent the opinions of the authors and not necessarily those of the Thomas B. Fordham Institute.

Anna J. Egalite is an assistant professor in the Department of Educational Leadership, Policy, and Human Development at North Carolina State University.

Trip Stallings is the Director of Policy Research at the William and Ida Friday Institute for Educational Innovation at North Carolina State University.