Common Core repeal: Ohio’s bad penny (part 1)
On May 18, another bill aimed at repealing Common Core in Ohio was introduced. House Bill 212 is more troublesome than its many predecessors, mainly because it aims to do far more than repeal Common Core. Legislators should put this bill out to pasture, and here’s why.
The war on assessments
HB 212’s worst offense is that it declares war on a rigorous assessment system. First, the bill’s text calls for the adoption of Massachusetts’s pre-Common Core standards. (We've talked before about why Massachusetts decided to move away from its previous standards in favor of Common Core, and questioned why Ohio would want to pick up another state’s standards when that state has already decided they were no longer good enough.) In an effort to align standards with assessments, HB 212 also calls for the use of Massachusetts’s pre-Common Core tests—which is logical in this circumstance and definitely not the worst option as far as tests go. (This past year, Massachusetts allowed districts to choose between the state test, MCAS, and PARCC.)

Unfortunately, HB 212 also allows for the adoption of another test—the state assessments administered in Iowa prior to 2010. Currently, Iowa is a Smarter Balanced state. But back in 2010, Iowa used the Iowa Tests of Basic Skills (for elementary and middle schools) and the Iowa Test of Educational Development (for high schools). Many schools in Ohio already use the Iowa Tests of Basic Skills (often as a diagnostic for gifted programs), so the assessment wouldn’t be completely new to districts and schools. But using a test as a low-stakes, additional measure of achievement is different from making it a state assessment that determines report card grades. (Not to mention that the Iowa tests are designed to be norm-referenced, not criterion-referenced, which brings up a whole separate set of issues.)

Furthermore, the Iowa tests aren’t aligned to Massachusetts’s standards—meaning that districts choosing the Iowa tests would be administering an assessment unaligned to the state’s own chosen standards. Even more worrisome, HB 212 compels the state education department to create a “method for comparison” in order to calculate report card ratings. Comparing two distinct tests is tricky enough when the comparisons are for the sake of policy recommendations or research observations—but comparing tests in a valid and fair enough way to assign school letter grades? That’s really sticky territory. In short, it’s a mess.
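For a sense of why a statistical “method for comparison” is such sticky territory, consider the simplest tool in the psychometric kit: equipercentile linking, which maps a score on one test to the score at the same percentile rank on another. The sketch below is a toy illustration in Python with invented score distributions, not anything HB 212 prescribes; it shows that the arithmetic is trivial, while the validity is anything but.

```python
import numpy as np

def equipercentile_link(scores_x, scores_y, x_value):
    """Map a score on test X to the test-Y score at the same percentile rank."""
    # Percentile rank of x_value within the X distribution (0-100).
    pct = np.mean(np.asarray(scores_x) <= x_value) * 100
    # The Y score sitting at that same percentile rank.
    return np.percentile(scores_y, pct)

# Invented score distributions standing in for two different tests.
rng = np.random.default_rng(0)
iowa_like = rng.normal(200, 25, size=5000)   # hypothetical norm-referenced scale
mcas_like = rng.normal(240, 30, size=5000)   # hypothetical, different scale

print(round(equipercentile_link(iowa_like, mcas_like, 210), 1))
# The arithmetic always produces *a* number, but the mapping is only
# defensible if both tests measure the same content on comparable
# populations, which is precisely what fails when one test isn't aligned
# to the standards the other covers.
```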
Unfortunately, the bill’s move to multiple state tests (and the problems associated with them) isn’t the only questionable decision in its war on assessments. HB 212 not only eliminates the requirement for local school boards to report the results of kindergarten diagnostic assessments, it also eliminates the requirement that every student be given the kindergarten readiness assessment. Since good teaching requires diagnosing where kids start, it’s hard to imagine that districts will stop administering the diagnostic just because it’s no longer state law. But the removal still raises a simple question: If teachers need to know where kindergarten students are at the beginning of the year in order to meet their needs, and if Ohio Revised Code expressly forbids schools from using the readiness assessment as a way to keep students from enrolling in kindergarten, then why remove the requirement at all?
In a final strike against assessments, HB 212 gets rid of end-of-course American history and American government tests in high schools. This is a shocking move, considering recent NAEP data shows that only 18 percent of eighth graders are proficient or advanced in U.S. history. Only 27 percent are proficient in geography, and only 23 percent are proficient in civics. This isn’t a new problem either: A 2013 report from the Heartland Institute discusses American students' "alarmingly weak" grasp of history and civics, and my colleague Robert Pondiscio has argued that high schoolers should be required to pass the U.S. citizenship test. For a country that prides itself on its exceptionalism, the lack of history and civics knowledge among our children is appalling. We should be emphasizing the mastery of American history and government content—not belittling its value as a core subject worthy of assessment.
Cherry-picking strategies
Much attention is given to Massachusetts in HB 212. This admiration for the Bay State isn’t misplaced: It consistently outperforms other states, and its students compete well on international tests. But if policymakers are hoping to graft Massachusetts’s results onto Ohio by adopting its standards and assessments, they’re missing a key component—improving teacher licensure.
During previous repeal attempts in Ohio, Common Core opponents brought in Sandra Stotsky to testify. Stotsky is often asked to testify because of her role on Common Core’s Validation Committee. But even more importantly, she worked in Massachusetts’s Department of Elementary and Secondary Education from 1999 to 2003. During her tenure at the department, Stotsky supervised comprehensive revisions to the same Massachusetts standards and assessments that HB 212 now seeks to time-warp into Ohio. The problem, however, is that HB 212 cherry-picks Stotsky’s work. While Stotsky has strong feelings about the Common Core that opponents find useful, they often ignore her equally passionate argument that changes to the teacher licensing system were just as instrumental to the “Massachusetts education miracle” as high standards and assessments. In her new book, Stotsky fleshes out this argument, contending that policymakers should revamp the teacher licensure system if they really want to improve student achievement. Considering how much stock Common Core opponents put into what Stotsky says about standards, it’s interesting that a reform she champions as equally important has gotten virtually no attention.
Not only have the bill sponsors ignored Stotsky’s championed reform, their proposal would actually make the licensure system in Ohio worse (which is saying something, since it’s already seriously flawed). Ohio currently has a teacher residency program that was designed to provide support to teachers during their first four years through mentorship, collaboration with veteran educators, professional development, and assessment feedback. In order to successfully complete the program and move to a more advanced license, teachers must take and pass the Resident Educator Summative Assessment (RESA)—a performance-based assessment that requires a teacher to submit a portfolio of evidence and completed tasks. The assessment measures a teacher’s ability to design and deliver instruction that is engaging, emphasizes higher-order thinking, and uses data to drive instruction. Unfortunately, HB 212 mandates the removal of this assessment. In doing so, the bill eliminates the method by which Ohio teachers are judged to be ready for a professional license—with no alternatives in sight.
****
Considering Ohio’s troubling honesty gap, the 40 percent of Ohio graduates who take remediation courses upon arriving at college, and the whopping 68 percent of Ohio ACT takers who didn't meet the mark for college readiness last year, Ohio can’t afford to go backward. High standards, quality tests, and accountability are all key reforms that the Buckeye State must not eliminate—especially not for the sake of a politically motivated third attempt to repeal academic standards that have shown such promise.
PARCC recalibrates the value/burden equation
When it comes to the raucous debate over standardized testing, cooler heads might just prevail. In a recent move, PARCC announced changes to its exams starting in 2015–16. PARCC is a consortium of states working to design assessments aligned to the Common Core standards in math and English language arts; Ohio and ten other states administered PARCC for the first time in the 2014–15 school year. Dr. Richard A. Ross, Ohio’s superintendent of public instruction, sits on its governing board.
On May 20, the governing board voted in favor of two key changes that should alleviate some of the logistical burdens schools faced when administering these exams: eliminating one of the two “testing windows” and reducing the amount of testing time by roughly ninety minutes in all tested grades.
Collapsing two testing windows into one
The spring 2015 testing window for PARCC extended from mid-February to mid-May. That’s a long time. Of course, schools were not required to administer exams throughout the full testing window—they could use as few or as many of the days within the window as they needed. But for students, parents, and educators, the three-month window probably made “testing season” feel unusually long and drawn out. (In contrast, Ohio’s old state exams were administered over the course of roughly one month.) It also meant that testing interrupted classroom instruction for more of the school year—and earlier.
The reason for the long testing window was fairly simple: The assessment system included two exams. The first, the “performance-based assessment” (PBA), was given in February–March, and the second—the “end-of-year assessment” (EOY)—was given in April–May. The PBAs focused on students’ application of knowledge and skills (e.g., solving multi-step problems, explaining mathematical reasoning), while the EOYs focused more on traditional assessment items like reading comprehension or straightforward multiple-choice math problems. See for yourself the differences in the sample PARCC exams.
But starting in spring 2016, PARCC will be administered in one thirty-day testing window, occurring in the traditional testing period of April–May. Importantly, while the earlier PBA testing window is erased, some of PARCC’s performance-based tasks will be preserved in next year’s summative exam.
Reducing the amount of testing time
In response to concerns about overtesting, PARCC will reduce the amount of testing time on each of its subject-area exams. Specifically, the maximum amount of testing time allotted for each subject-area assessment will fall by roughly forty-five minutes. Across the two tested subjects, that means testing time will fall by about ninety minutes per year for students in the tested grade levels (i.e., grades 3–8 and high school).
The drop in testing time means that in 2016, Ohio students will still sit for four to five hours on a math or English language arts assessment. (The time allotments for each subject-area assessment vary depending on grade level.) Next year’s PARCC exam, while shorter than the 2015 edition, will remain longer than Ohio’s old assessments, which clocked in at 2.5 hours per subject-area assessment. Thus, the slimmed-down version of PARCC will fall somewhere between the time needed to take this year’s edition of PARCC and Ohio’s old state tests—not a bad place to be.
Balancing an assessment’s value and burden
There’s a great chart in a July 2014 WestEd report on K–12 assessments prepared for the Colorado Department of Education. On the vertical axis is the word “value”; on the horizontal axis is “burden.” Plotting value against burden is an intuitive way to think about the trade-offs in state assessment programs.
On the one hand, we want standardized exams that add value by providing essential (and actionable) information for parents and policymakers. Longer, more probing assessments typically yield a clearer understanding of a student’s true knowledge and abilities within a content area. (As Harvard professor Daniel Koretz takes great pains to point out, standardized exams capture a sample of knowledge, abilities, and behaviors within a larger domain.) PARCC, with its emphasis on demanding performance-based tasks, is designed to provide a richer, clearer understanding of the true skills and abilities of students—that’s the distinct advantage of PARCC over the old fill-in-the-bubble assessment regime.
But on the other hand, we also need to be careful that standardized exams don’t put an unnecessary burden on schools, eating into instructional time and creating scheduling headaches. One could imagine a technically “perfect” exam that would also be impractical to administer due to schools’ constraints on time and technology—and opposed by parents and citizens. At the end of the day, policymakers—and test designers—need to find the optimal ground between maximizing assessments’ value and minimizing their burden.
After a first round of testing, PARCC has recalibrated—wisely, it would appear—its value/burden equation. The governing board has altered the exam in order to relieve some of the testing burden that schools face, though the abridgement could diminish the desirable technical properties of the longer, more in-depth exam, as PARCC advisor Robert Brennan notes. Even with the changes, PARCC should still be considered far and away superior to Ohio’s old assessments. Recall that Ohio’s old tests were weak, largely multiple-choice tests pegged to an abysmal proficiency standard—exactly what most Ohioans don’t find valuable and want to get away from.
The question now is what, if anything, the state legislature will do. Before the PARCC revisions were announced, the Ohio House—clearly frustrated by this year’s rocky testing season—passed a bill (HB 74) that, if enacted, would forbid the administration of PARCC. (It would require the Ohio Department of Education to procure another assessment.) The legislation has been sent to the Senate.
State lawmakers, however, should take note of the commendable rebalancing that PARCC is undertaking. The governing board of PARCC has shown itself to be reasonable. Let’s hope the Ohio legislature will exercise wisdom and good judgment as well.
Common Enrollment, Parents, and School Choice: Early Evidence from Denver and New Orleans
In 2012, Denver and New Orleans became the first two cities in the country to utilize a common enrollment system that included both district-run and charter schools. A new report from the Center on Reinventing Public Education (CRPE) takes a look at the benefits, limitations, and implications of these common enrollment systems. Both cities are widely regarded as leaders in developing well-functioning school marketplaces; for example, a recent Brookings Institution report awarded New Orleans top marks in its “education choice and competition index” (Denver was rated sixth-best out of more than one hundred metropolitan areas).
In both cities, the enrollment systems were designed to make choosing a school a clearer and fairer process for all families. They employ a single application with a single deadline that parents use to apply for any and all schools within the city. But the systems themselves are different: In New Orleans, students have no assigned school; instead, every family must use the OneApp to apply for schools. In Denver, however, choice is voluntary—students receive a default assignment, but the SchoolChoice application allows families (if they want) to apply to any public school in the city.
Despite these differences, both Denver’s and New Orleans’s common enrollment systems have demonstrated similar benefits, according to the authors. The enrollment and matching process has become more fair, transparent, and consistent. Families choose schools more effectively and confidently, armed with information via school fairs, resource centers, and parent guides (which use common metrics to communicate school performance). And school leaders—thanks to common enrollment data that doubles as feedback—have formulated a clearer picture of the kinds of schools parents want.
The authors also point out a few areas in which the systems in Denver and New Orleans can improve. These include parents’ desire for more detailed and personalized information; misunderstandings of the school matching process (which, in some cases, inadvertently reduced families’ chances of finding a desirable match); and the troubling fact that in Denver, where participation is voluntary, minority and low-income families are less likely to participate than white and affluent families. (The authors note that while comparing choice participation rates is difficult, gaps in participation existed before the introduction of the common enrollment system.)
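Part of that matching confusion is understandable, since the mechanism itself is invisible to families. The report doesn’t specify the algorithm either city uses, but unified enrollment systems of this kind are generally built on some variant of student-proposing deferred acceptance. Below is a minimal sketch under that assumption, with invented students, schools, priorities, and seat counts; real systems layer on sibling priorities, geographic zones, and lottery tie-breakers.

```python
def deferred_acceptance(student_prefs, school_priorities, seats):
    """Student-proposing deferred acceptance (Gale-Shapley) for school choice.

    student_prefs:     student -> list of schools, most-preferred first
    school_priorities: school -> list of students, highest priority first
    seats:             school -> number of available seats
    """
    rank = {s: {stu: i for i, stu in enumerate(order)}
            for s, order in school_priorities.items()}
    next_pick = {stu: 0 for stu in student_prefs}  # next school each student tries
    held = {s: [] for s in school_priorities}      # tentative (not final) offers
    unassigned = list(student_prefs)

    while unassigned:
        stu = unassigned.pop()
        prefs = student_prefs[stu]
        if next_pick[stu] >= len(prefs):
            continue  # ranked list exhausted; student stays unmatched
        school = prefs[next_pick[stu]]
        next_pick[stu] += 1
        held[school].append(stu)
        held[school].sort(key=lambda x: rank[school][x])  # best priority first
        if len(held[school]) > seats[school]:
            unassigned.append(held[school].pop())  # bump lowest-priority applicant

    return held

# Tiny invented example: three students, two schools, one seat each.
students = {"ana": ["north", "south"],
            "ben": ["north", "south"],
            "cam": ["south", "north"]}
priorities = {"north": ["ben", "ana", "cam"],
              "south": ["ana", "cam", "ben"]}
print(deferred_acceptance(students, priorities, {"north": 1, "south": 1}))
# {'north': ['ben'], 'south': ['ana']}  (cam goes unmatched in this toy case)
```

The property worth knowing (and the one most often misunderstood) is that this student-proposing version is strategy-proof for families: ranking schools in true order of preference is always a safe strategy, so there is nothing to gain by listing a “safe” school first.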
The biggest area for improvement, however, has little to do with the original purpose of common enrollment systems: While the application and enrollment process has gotten simpler, parents in both cities say that there aren’t enough high-quality schools to choose from. Indeed, the researchers point out that “demand is heavily concentrated in a handful of schools.” Thus, although education leaders in New Orleans and Denver are working to improve already-promising common enrollment systems, those systems will remain severely limited until both cities increase the number of high-quality schools/seats available to their families. Nevertheless, kudos to both cities for devising a common enrollment system that includes both district and charter schools. Their example is one that Cleveland is already following; in September, the Transformation Alliance received a grant to develop a universal enrollment system.
SOURCE: Betheny Gross, Michael DeArmond, and Patrick Denice, “Common Enrollment, Parents, and School Choice: Early Evidence from Denver and New Orleans,” Center on Reinventing Public Education (May 2015).
A voucher success story
Sam Myers was among the first recipients of Ohio’s Jon Peterson Special Needs Scholarship, starting at Mansfield Christian School in the fall of 2012. It sounds simple, but the fight for the Myers family to access the school that best fit Sam’s academic needs was anything but easy.
For a look at that struggle, ably supported by the good folks at School Choice Ohio, take a look at this video:
One family’s hard work and persistence—finding an answer where others may not even have seen a problem—paid off not only for them but for thousands of others across Ohio.
Fast-forward to May 30, 2015, when Sam graduated from Mansfield Christian. A number of big-name well-wishers lined up to congratulate him. You can hear their own heartfelt words in this video:
In the words of our governor: you rock, Sam. Congratulations and best wishes for the great future ahead of you.
Ohio high-flyers share the strategies behind their success
Recently I had the privilege of listening to practitioners from Ohio’s high-performing districts who shared how they’re achieving success. These districts are earning A grades on their state report cards in notoriously difficult areas such as closing achievement gaps, effectively serving gifted students and students with disabilities, and increasing student achievement across the board.
The series of events was hosted by Battelle for Kids in conjunction with the Ohio Department of Education, and I was able to hear from five of the exemplary districts: Marysville, Orange City, Oak Hills Local, Solon City, and Mechanicsburg. Here are the important commonalities I found among the strategies discussed.
1. Plus time
This strategy goes by a different name depending on which district you visit: “no-new-instruction time,” “flex time,” “plus time,” and “support classes” were all terms I heard, but the basic idea was the same. Each of these high flyers altered their daily schedule so that students received around forty minutes a day of either enrichment or remediation. To be clear, this isn’t an additional class in which students learn new information; instead, this is a time for students to either solidify or improve on what they know. How this looks on the ground depends on the district culture. In Solon, students with IEPs, students who failed their previous state assessment, and students struggling in a particular class receive additional support. The type of support, lesson topics, and the students in attendance are determined by student-specific data that teachers analyze and then use to plan instruction. In Marysville, every student in the district takes part in flex time, which is a daily forty-five-minute period organized into a nine-week class. These classes are designed and planned by teachers with a focus on remediation or extension. For example, a Marysville intervention specialist explained that while one student may receive intensive small-group instruction on pre-algebra concepts, other students may take part in a robotics course or band.
2. Data analysis and internal accountability
If there was one clear mantra throughout each presentation, it was the importance of creating, tracking, and acting on data. For example, Orange City talked about how their teachers leveraged common planning time to hold data meetings where teachers could not only analyze their own data, but share ideas for how to use the data to improve the practice of their colleagues. Oak Hills explained how their teachers complete walkthroughs of colleagues’ classrooms, share their observations, and create a system of data for monitoring effective teaching outside of the state’s teacher evaluation system. Perhaps the best use of data was in Solon, where district representatives emphasized that systems don’t matter unless the data is actually used to drive instruction. By using real-time data from common assessments that are created by teachers, educators in Solon are able to pinpoint (down to the level of individual test questions) areas for growth, brainstorm with other teachers and their principal (in a non-evaluative environment) ways to improve, and purposefully determine where students should be placed during flex time.
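That kind of question-level pinpointing is easy to operationalize. Below is a minimal sketch of an item analysis; the data format and question identifiers are invented stand-ins for whatever export a district’s assessment platform actually provides.

```python
from collections import defaultdict

def item_analysis(responses):
    """Percent of students answering each question correctly.

    responses: one dict per student mapping question id -> 1 (right) / 0 (wrong).
    This format is an invented stand-in for a common-assessment export.
    """
    correct = defaultdict(int)
    for student in responses:
        for item, score in student.items():
            correct[item] += score
    n = len(responses)
    return {item: round(100 * c / n, 1) for item, c in sorted(correct.items())}

# Hypothetical results from a teacher-created common assessment.
results = [
    {"q1": 1, "q2": 0, "q3": 1},
    {"q1": 1, "q2": 0, "q3": 1},
    {"q1": 0, "q2": 1, "q3": 1},
    {"q1": 1, "q2": 0, "q3": 0},
]
print(item_analysis(results))  # {'q1': 75.0, 'q2': 25.0, 'q3': 75.0}
# An item far below the others (q2 here) flags a concept to reteach,
# and a natural grouping for flex-time placement.
```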
Several districts also mentioned quarterly data meetings in which principals share with and are accountable to superintendents about building-level data. This internal accountability is just as important as tracking and acting on data. One district representative explained it this way: “Data isn’t about judgment; it’s about trying to uncover ways to do it better.” However, once those ways are pinpointed, “decisions are made and justified with data…and we hold ourselves accountable to the people we’re serving.” These accountability sentiments were echoed by several districts, making one thing clear: Highly effective schools hold themselves accountable for being highly effective every day, all year—not just when school report cards are published.
3. Leveraging teacher talent and leadership
Professional development for teachers is incredibly important but notoriously bad. So how do high-performing districts make professional development better? By using what they have. A common thread among the high flyers is a desire to pinpoint their talented people and then, instead of patting them on the back and moving on to the next emergency, leverage that talent to make the district better. For example, Marysville (which recently earned an A on its report card for serving students with disabilities) utilizes a nearly universal co-teaching model. In order to make the co-teaching model more effective, administrators identified a general education teacher and an intervention specialist who had mastered the model (again using student achievement data to prove this) and then allowed them to lead professional development sessions on co-teaching. District representatives shared that co-teaching effectiveness skyrocketed after this session and subsequent collaboration—and their report card grade backs that up. But Marysville isn’t the only example. Representatives from each district explained how their administration works to give the best teachers influence over building and district decisions and opportunities to collaborate with other teachers, particularly during common planning times. Most importantly, these districts allow their highly effective teachers to share best practices as experts. After all, why spend money on outside experts when there are highly effective, proven experts in your midst?
4. High expectations and rigor for all students
This strategy was best explained by Oak Hills's assistant superintendent, who described precisely what high expectations for all students means: “All means all, and that’s all that all means.” In other words, although time and resources may need to vary depending on students’ needs, every student should receive rigorous instruction—every student, not just those expected to pass state tests. Orange City echoed these sentiments when its representatives talked about the power of de-tracking students. Instead of placing students in contained or low-level class sequences, Orange City utilized inclusion for all but its most severely disabled students and required every student to be on a rigorous class track regardless of performance history. By ensuring that the lowest-performing students were exposed to the same curriculum and expectations as other students—and by incorporating plus time—the district earned an A on its report card measure for the lowest 20 percent of achievers.
***
Although each of these high-performing districts undoubtedly has dozens of practices and policies that contribute to their state report card success—all of which could use deeper analysis than is possible here—the four listed above are a starting point for any district wishing to emulate that success. Time and time again, representatives from these districts emphasized that the achievement of their schools wasn’t the result of some silver bullet or secret recipe, but of a simple combination of identifying a strategy that works and acting on it—consistently and faithfully.
The Paperwork Pileup: Measuring the Burden of Charter School Applications
According to a paper released this week by the American Enterprise Institute, charter authorizers are putting too many meaningless application requirements on organizations that propose to open schools, thereby limiting school autonomy and creating far too much red tape.
The report shares lessons, provides authorizer Dos and Don’ts, and divides charter application criteria into categories of appropriate and inappropriate based on AEI’s analysis of application requirements from forty authorizers around the land. The authors conclude that:
- Charter applications could be streamlined to eliminate one-quarter of existing content
- Authorizers may mistake length for rigor
- The authorizer’s role is sometimes unclear
- While authorizers pay plenty of lip service to innovation, the application process doesn’t lend itself to fleshing out truly innovative school models
AEI correctly notes the importance of the authorizer’s role as gatekeeper for new schools and points out that authorizers should establish clear goals, hold schools accountable, review key aspects of school applications for developer capacity, and monitor compliance and finances. Authorizers shouldn’t see themselves as venture capitalists, assume the role of school management consultants, deem themselves curriculum experts, or feel entitled to include pet issues in applications.
All true, and all wise. Where it gets sticky—and where this report makes a wrong turn—is distinguishing what kinds of information are legitimate and important for authorizers to seek at the application stage and what kinds are superfluous, or even dysfunctional. In the end, as the National Association of Charter School Authorizers (NACSA) also points out, the AEI authors put too much in the latter basket.
Fordham has been authorizing charter schools in Ohio for nearly ten years. We’ve walked this walk—and sweated and agonized and made tough calls and a few mistakes. Let me summarize how our experience does and doesn’t align with the authors’ conclusions.
Yes, some authorizers mistake volume for rigor; just because the application is umpteen pages long does not mean it’s a good application. And yes, sometimes the authorizer’s role is unclear. Nowhere is that more apparent than in Ohio, where we are still trying to get authorizers out of the business of selling services to their authorized schools. (One can’t objectively monitor a school’s performance if one has a hand in its operations.) And all too often we see “proven models” favored over potentially new and innovative ones. It’s easy to forget that well-known successes like KIPP and Success Academy—and in Ohio, United Schools and the Breakthrough network—were also unproven once.
Now to our disagreements. AEI’s list of overly burdensome requirements is too long and in some cases rules out information that we’ve found valuable, even vital, in determining which proposed charters deserve to be authorized.
- Explaining why an applicant proposes certain goals and performance metrics: AEI thinks this is excessive. In our experience, you're an irresponsible authorizer if you don't ask about this. Let's say the goals and expectations for students are low—why not have the applicant explain? What if you think the goals are set too high? Why not ask for their thinking? They may have a perfectly good explanation. What if the goals don't align with the curricular and instructional program (e.g., the application references STEM but contains no STEM-related outcomes)?
- Explaining how the school will meet all students’ needs: It's not unusual for applications to be very strong with a general education population but weak when it comes to educating children with disabilities, gifted students, or students with limited English proficiency. If an applicant can't articulate the basics on paper or in the interview, that applicant probably lacks knowledge regarding these areas—and may lack a plan for how to serve these kids. Asking how the school will go about it is critical.
- Presenting curriculum samples and justifying the choice: Let's say the curriculum will be teacher-created (or will rely on some new, off-the-shelf material). I'm not a curriculum expert, but I'd want someone on the review team to focus on whether it has substance. I've seen a number of applications for "Classical" schools recently, but very few of them, according to the review team, were truly taking a classical approach. Why not raise such questions? They may have a good explanation, but either way, you build your evidence base for a decision of whether or not to approve.
- Explaining how instructional methods will serve students: Let's say the school proposes a project-based model, yet there's much discussion in the application about what appears to be direct instruction. Could there be a reasonable explanation? The applicant should have an opportunity to put it forth.
- Justifying the choice of financial strategies/goals: Too many applications don't present viable financial plans. If an applicant proposes a budget in which the fund balance every year is extremely low and the margins very thin, you should ask about that. Maybe the school will be supported by an external organization. But what if it’s heavily dependent on borrowing? What if the proposers claim they can do it all on the state dollars? These are all issues that demand discussion.
- Explaining the advertising plan: Competition for enrollment is very tough. If an authorizer green-lights a school to receive millions in public funding, it is completely appropriate to ask how the advertising plan will help meet enrollment targets. Is the applicant just putting up billboards, or will it go door to door? Is it using strategies that we know work in a particular community? Does it show evidence of understanding its community?
- Explaining the plan to provide meals: There are federal funds involved here, as well as myriad compliance issues. Sure, it’s appropriate to ask how this will work.
- Explaining any innovations to be used in the school: If something is truly innovative, why not talk about it before it hits the ground? What’s the basis for expecting it to work? There also might be something there to watch and share with others.
- Offering a rationale for choosing a specific location/community: This goes back to financial viability. If you already have ten schools in a ten-block area and someone is proposing an eleventh, it's perfectly reasonable to ask why the proposed school wants to locate there. If the authorizer approves the school and it can’t meet targets and closes (especially mid-year), children and families are put in a terrible position—and many state and federal dollars are wasted.
Yes, some charter applications contain meaningless and excessive requirements, likely mandated by state laws and regulations. This paper should serve to catalyze further conversation on the topic of charter school autonomy and accountability, and where the problems (on both sides) arise. But it’s a mistake to place too much emphasis on shortening the charter application itself or to lessen the importance of key application requirements that ultimately strengthen the viability of the whole.
SOURCE: Michael Q. McShane, Jenn Hatfield, and Elizabeth English, “The Paperwork Pileup: Measuring the Burden of Charter School Applications,” American Enterprise Institute (May 2015).