Twenty-five years into charter schooling, there’s no shortage of horse-race research on whether charters or district schools are pulling ahead. But regrettably little attention has been paid to what it takes for charters to get out of the starting gate. Last year’s report by the Education Research Alliance for New Orleans broke some useful ground on that subject, but was focused solely on the record in one quite idiosyncratic community.
So it’s welcome news that Fordham has taken a vital next step by expanding the scope of the inquiry to four large chartering states, and by widening the kinds of questions asked. And I’m pleased to report that its new study, Three Signs That a Proposed Charter School Is at Risk of Failing, was written by my former colleague Anna Nicotera, along with another estimable researcher David Stuit, both now at Basis Policy Research.
Once past the somewhat lugubrious title, the reader will find lots to think about. The authors go to an obvious but underexplored source of data—charter applications. After poring over more than six hundred apps from Texas, Colorado, Indiana, and North Carolina to identify common characteristics, they conclude that three indicators correlate with a likelihood of low academic proficiency and slow growth in a charter’s first two years: No school leader is identified in the application; the school targets a high-risk population but proposes a “low-dose” program; or the proposed charter will feature a child-centered curriculum.
The first two findings make intuitive sense, while the last is likely to set teeth grinding among those who value charters for promoting innovation. But the authors acknowledge that the world of Montessori/Waldorf/Paideia is an awkward fit for test-driven state accountability systems. And far from recommending automatic rejection, they advise authorizers to “carefully review applications for child-centered, inquiry-based models to determine if there is evidence that teachers will be highly trained and that the proposed school has a detailed plan to ensure that grade-level standards are covered.” They add that authorizers “may want to consider developing rigorous, mission-specific performance measures.”
Actually, this is just one of the “outlier” issues that authorizers are now trying to address in more systematic ways, whether it’s sorting through the claims of virtual charter operators about the distinctiveness of their student populations or questioning whether a four-year cohort graduation rate makes sense for a charter serving former dropouts. In these cases, just as in the new study, you can hear some gear-grinding when conventional approval and oversight practices encounter unconventional learning models.
Three Signs also underscores the importance of authorizing itself. Some who advocate unfettered charter growth seem to view authorizers as petty bureaucrats interposing themselves between parents and the choices they want to make for their kids. But their work matters a lot, in ways that are obscured by a Catch-22 in charter research. As the New Orleans authors put it: “Since we can only observe the future performance of the stronger applications, we cannot include the worst applications when testing whether application materials predict performance.” In other words, researchers typically have to deal with the charters authorizers have approved, without the benefit of the actual “control group”—those they turned down. Since authorizers typically reject about twice as many applications as they approve, the success of this gatekeeping work is largely unexamined and undervalued.
So my attention was also captured by a second set of findings on why authorizers were likely to turn thumbs down on charter apps. It’s a pretty straightforward list that aligns well with NACSA’s Principles and Standards for Quality Charter Authorizing: Schools are less likely to be approved if the application is wobbly in the financial area or displays a tenuous grasp of how to use data, or if it omits discussion of how to sustain a “culture of high expectations.” Another negative marker is having no plans to hire a management organization—which seems to bookend the “no identified principal” finding, but probably reflects the relatively high proportion of managed charters in the four states under review.
In any event, this is a substantial contribution that expands the portfolio of “pipeline” research—and more is on the way. NACSA has begun a large-scale examination of charter applications—including the two-thirds that are not approved—to get a better handle on how the dynamics of this process affect a range of outcomes including the slowing pace of sector growth.
For now, compliments to Nicotera and Stuit for diving into this new territory and coming up with some rich and well-presented findings.
Nelson Smith is Senior Advisor to the National Association of Charter School Authorizers.
The views expressed herein represent the opinions of the author and not necessarily those of the Thomas B. Fordham Institute.