The Fordham Institute recently released Three Signs that a Proposed Charter School is at Risk of Failing, a study of over six hundred charter applications that aims to identify risk factors that make a potential charter school more likely to perform poorly during its early years. As the leader of Fordham’s authorizing team in Ohio, I was eager to read the report to see whether it aligns with what we see when reviewing applications and, subsequently, authorizing brand new schools.
Indeed, one or more of the report’s top three identified “flags”—in our experience—are usually present in weak charter applications:
- Failure to identify a school leader for a self-managed school
- “High risk, low dose”/misalignment of programming: applications whose target population is “at-risk” youth, yet the application fails to include sufficient academic supports (e.g., intensive small group instruction, extensive tutoring, etc.) to serve that population
- The use of child-centered, inquiry-based instructional models (e.g., Montessori, experiential, etc.)
These “flags” make sense. Self-managed schools—those not supported by a larger network—typically lack access to deep and consistent talent pipelines, and often have a harder time finding and retaining high-quality school leaders. Misalignment of programming is another problem. If an application proposes to serve an at-risk population, it’s a big red flag when the narrative and budget don’t reflect sufficient programming and funding to serve that population. And although child-centered, inquiry-based instructional models can be excellent, the report’s authors correctly point out that our current environment of standards-based accountability means that these models may need to be adapted for accountability purposes, thereby potentially limiting what made the educational approach novel in the first place.
There are, of course, always exceptions, and none of these findings should be used as criteria to automatically reject or approve charter applications—all of which the report authors acknowledge. And they’re right: Good authorizers rely on industry best practices to do their work. Yet sometimes—like in any sector—there are professional judgment calls you make because experience tells you the situation merits it.
We at Fordham’s authorizing shop have seen a lot of applications over the years—stellar ones, good ones, mediocre ones, and poor ones. There are always issues that raise questions or concerns, even in the best applications. Where you have a promising application with a few flags, what matters most (in our experience, anyway) is what those issues are, the quality of the rest of the application, and the capacity of the development team. For example, we’re OK if you don’t identify a school leader, as long as: (1) you have a well-detailed, viable process for how you will find, hire, and retain that person; (2) you address succession planning; (3) there are no other major issues in the application; and (4) we can tell from your interview that you have a high-caliber founding team capable of getting the job done.
We’re also OK with child-centered, inquiry-based schools. In fact, the one new school that we did approve for 2017–18 uses “Expeditionary Learning.” No other school in our portfolio utilizes a remotely similar approach; however, the school’s application and interview were very good, they have an excellent founding team, and they have a track record of developing similar, high-performing models. We’re confident that they will do a great job serving kids.
We also once had an applicant that wrote a very good application overall, yet certain aspects of its first-year cash-flow budget were wrong; as an out-of-state applicant, they missed some Ohio-specific details about the timing of funding. However, the rest of the application was solid, the development team was highly capable, and the applicant had received a sizeable amount of grant funding. Under those circumstances, we believed they were a safe bet.
I offer these examples to show that a lot of authorizing ties back to best practices. Sometimes, though, a promising candidate presents flags, and you have to weigh whether to turn down the application—and consequently preclude an otherwise good school from opening—or conclude, based on the entirety of the application and interview, that the flags are manageable and can be sufficiently addressed.
One of the report’s other data points that caught our eye was this: when an application has two or more of the three flags listed above and is approved anyway, the school has an eighty percent chance of being a low performer. That’s pretty compelling, though it makes sense, too. If you don’t have a school leader and your programming is misaligned with the students you’re proposing to serve, it’s hard to see how the school would be successful once in operation. That’s like proposing a business plan for a bakery without identifying a CEO, and saying you’ll use grills to make the baked goods.
From a practitioner’s standpoint, Three Signs is a good first step toward quantifying a key aspect of authorizing. As the authors note, the report may be most useful for authorizers that are inundated with applications and need an initial scan—beyond just checking whether anything is missing—to mark certain applications for a more thorough review.