A new study by Pat Wolf and a few of his graduate students is a formal meta-analysis of the impacts of voucher programs on math and reading achievement. It attempts to set the voucher record straight in the face of conflicting messages coming out of academia, think tanks, and the press.
The authors go through a litany of prior reviews of voucher achievement effects and deem them insufficient, primarily because they include less rigorous studies or omit relevant, rigorous ones. Moreover, those reviews reach divergent conclusions, ranging from no effect to positive effects to a mix.
Wolf’s meta-analysis, however, includes only experimental studies, or randomized controlled trials (the “gold standard”). The authors include all such studies ever conducted on voucher programs, both inside and outside the United States, that focused on participant effects and measured test score outcomes in math or reading, found primarily through a comprehensive search of library databases and Google Scholar. (Studies that used outcomes such as graduation rates and college attainment were excluded, as were those not published in English or available in English translation.) Included programs could be publicly or privately funded, or funded indirectly via tax credit scholarships. Ultimately, nineteen studies representing eleven programs met these criteria: eight in the United States (including publications that focused on Milwaukee, Dayton, and New York City) and three in Colombia and India. Together they yield a total of 262 effect sizes (an effect size being, in layman’s terms, the strength of an impact calculated on a common scale).
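For readers unfamiliar with the term, the usual way to put impacts on a common scale is the standardized mean difference, which divides the treatment-control gap in test scores by the spread of scores. The formula below is a generic illustration only; the exact estimator the authors employ (for instance, Hedges’ g with a small-sample correction) is not specified in this summary and should be treated as an assumption.

```latex
% Illustrative only: a standardized mean difference (Cohen's d),
% the typical "common scale" for meta-analytic effect sizes.
% The specific estimator used by Shakeel, Anderson, and Wolf is not
% detailed in this summary, so treat this as a generic sketch.
\[
  d \;=\; \frac{\bar{X}_{\text{voucher}} - \bar{X}_{\text{control}}}
               {s_{\text{pooled}}},
  \qquad
  s_{\text{pooled}} \;=\; \sqrt{\frac{(n_1 - 1)\,s_1^2 + (n_2 - 1)\,s_2^2}{n_1 + n_2 - 2}}
\]
```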
Wolf and colleagues find that voucher programs produce positive results overall, the magnitude of which varies by subject, location, and funding. They also find that the effects of voucher programs often start out null in the first year or two before turning positive in year four and beyond. Given these somewhat nuanced findings, it is best to simply repeat the authors’ bottom line:
Generally, the impacts of private school vouchers are larger for reading than for math. Impacts tend to be larger for programs outside the U.S. relative to those within the U.S. Impacts also generally are larger for publicly funded programs relative to privately funded programs.
This summary, however, might overstate the effects of U.S. programs. In reading, the cumulative impact of U.S. voucher programs is null. And in math, the effect is rather modest (0.07 SD) overall for the offer of a voucher (the “intent to treat” impact). It should be noted, however, that including Louisiana makes the overall “treatment on the treated” impact (actual use of the voucher) for U.S. programs null as well.
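As background on the two estimands: under the standard Bloom-style adjustment, the “treatment on the treated” effect is the “intent to treat” effect scaled up by the share of offered students who actually used their voucher. The figures below are purely hypothetical and are meant only to illustrate the arithmetic, not to restate the study’s results.

```latex
% Bloom-style relationship between the two estimands (illustration only).
% ITT = effect of being offered a voucher
% TOT = effect of actually using a voucher
% c   = compliance rate (share of offered students who use the voucher)
% The values below (ITT = 0.05 SD, c = 0.80) are hypothetical.
\[
  \text{TOT} \;=\; \frac{\text{ITT}}{c}
  \qquad\text{e.g.}\qquad
  \text{TOT} \;=\; \frac{0.05\ \text{SD}}{0.80} \;\approx\; 0.06\ \text{SD}
\]
```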
In the end, this valuable meta-analysis leads to one big question: Why are American voucher programs being bested by their foreign counterparts?
SOURCE: M. Danish Shakeel, Kaitlin P. Anderson, and Patrick J. Wolf, “The Participant Effects of Private School Vouchers across the Globe: A Meta-Analytic and Systematic Review,” EDRE Working Paper (May 2016).