New model law from the National Alliance acknowledges the elephant in the room: Virtual charter schools
By Chad L. Aldis and Jessica Poiner
Ohio’s charter school movement has faced a number of challenges over the past decade. A spate of school closings and allegations of financial misconduct contributed to the state being dubbed the Wild, Wild West of charter schools. Making matters worse, a comprehensive analysis in 2014 by Stanford University’s Center for Research on Education Outcomes (CREDO) found that, on average, Ohio charter students lost fourteen days of learning in reading and forty-three days of learning in math over the course of the school year compared to similar students in traditional public schools. To its credit, the Ohio General Assembly recognized these problems and in October 2015 passed House Bill 2 (HB 2)—a comprehensive reform of the Buckeye State’s charter school laws.
While HB 2 has only been in effect since February, there are already signs that the movement is changing for the better in response to the new law. Unfortunately, despite great strides forward, there is one group of charter schools in Ohio that’s still causing serious heartburn for charter school proponents and critics alike: full-time virtual charter schools. Attendance issues, a nasty court battle, the possibility that the state’s largest e-school (ECOT—The Electronic Classroom of Tomorrow) could have to repay $60 million in state funding, and poor academic performance have led to a growing push to improve e-schools.
The problem in Ohio is clear, but it isn’t limited to Ohio, especially with regard to low academic achievement. A seminal national study by CREDO released in October 2015 found that students in online charter schools across the nation struggled mightily, losing on average 72 days per year in reading and a jaw-dropping 180 days per year in math.
As we all know, identifying problems is easy. The difficulty is in finding solutions. Fortunately, the recently released model charter school law from the National Alliance for Public Charter Schools (National Alliance) offers a half dozen policy ideas intended to address the growing issues posed by online charter schools. These new model provisions include language addressing authorizing structure, enrollment criteria, enrollment levels, accountability for performance, funding levels based upon costs, and performance-based funding. The National Alliance acknowledges that not every one of these potential solutions will apply universally, given the unique context of each state’s laws, but it’s worth looking at how four of the model law provisions might impact Ohio.
Performance-based funding
The National Alliance, in one of its most controversial recommendations, suggests that states fund full-time virtual schools via a performance-based funding system. This idea is simple and intuitive on its face, and it confronts head on the student achievement challenges that online charter schools pose to policymakers. However, widespread low achievement in the movement means that performance-based funding would have an enormous impact, making it both technically and politically complicated to implement. The topic has been broached in Ohio, as State Auditor Dave Yost recently called on the General Assembly to examine “learning-based funding,” which would pay e-schools for successfully delivering—not just offering—education. Despite its complex nature, states considering this type of policy don’t have to start from scratch and should investigate similar models being pursued in a handful of states.
Accountability for performance
The model law suggests that charter contracts for online schools include additional measures in a variety of areas where full-time virtual schools have typically struggled, such as student attendance and truancy. Determining how to track attendance in a virtual school setting is difficult, but states have an obligation to online charter schools and their students to set clear guidelines for attendance. Fortunately, Ohio law already has a pretty clear expectation thanks to the aforementioned HB 2: “Each internet- or computer-based community school shall keep an accurate record of each individual student’s participation in learning opportunities each day.” While this is a great start, the law could be improved by making clear how full-time virtual schools will be held accountable for student attendance and participation, and how to account for learning that happens when a student isn’t “logged in” to his or her computer.
Enrollment levels
The National Alliance also recommends that states require authorizers to set maximum enrollment levels each year for full-time virtual schools, and that those levels increase based on performance rather than time. Ohio has enrollment restrictions in place, but the limit is based upon year-over-year growth and isn’t impacted by performance. Furthermore, because the movement was already large when the enrollment growth limits—15 percent for schools with more than 3,000 students and 25 percent for schools with fewer than 3,000 students—were enacted in Ohio, there really hasn’t been much of an impact. States considering adopting this model law would be wise to consider the size of their existing movements and ensure that academic success includes both proficiency and student growth. In the long term, managing enrollment growth could help ensure that the most successful online schools are able to serve the most students. It could also prevent an individual online charter school from becoming “too big to fail” (i.e., closing the school would be too disruptive to students) or too politically powerful to hold accountable for academic performance.
Enrollment criteria
Charter schools, including online schools, are public schools and must enroll all interested students. This has always been a core principle, but the new model law acknowledges that this idea may need to be reexamined in the context of full-time virtual schools. Because of the incredibly low student achievement of online charter school students, it’s becoming increasingly clear that students without strong learning supports and/or the proper preparation are struggling mightily in an online environment. A recent study from the Thomas B. Fordham Institute shows that Ohio e-school students are lower-achieving, more likely to have repeated a grade, and more likely to be low-income than other students. In other words, e-school students are those who are most desperately in need of a quality education. Unfortunately, the same study shows they’re not getting it: Across all grades and subjects, e-school students have lower performance in math and reading than otherwise-similar students who attend brick-and-mortar district schools. There’s a pretty significant moral quandary here: If full-time virtual schools consistently fail to serve a certain subset of students—a subset that’s most in need of a quality education—then at what point do they forfeit their right to educate these students?
There are two potential solutions here. The first is to transition virtual schools out from under the charter umbrella and establish them as their own type of public school. This would allow them to establish enrollment criteria, much like magnet schools operated by many school districts. This change would allow online charter schools to serve the students who would most benefit from their model without causing potentially irreparable academic harm to enrolled students who aren’t a good fit. In addition, by allowing virtual schools to determine whom they can best serve, it would be easier and fairer to hold them accountable for student achievement under a state accountability system.
The second option is to continue to require virtual schools to serve everyone but build some flexibility into the law. For example, recent changes in HB 2 explicitly allow Ohio full-time virtual charter schools to require an orientation course for new students. Allowing parents and students to better understand from the beginning the expectations and responsibilities inherent in online education is critical. Another policy option would be to require full-time virtual charter school leaders and teachers to engage with students and parents when students fall behind or struggle to meet attendance requirements. If counseling and conferences fail to address the issues, schools could even be required to assist a student to find a more traditional charter public or district school.
The National Alliance deserves praise for developing policy options that could address the appallingly low performance of many full-time virtual charter school students. There are too many students exercising this important educational option to simply turn a blind eye to its still-developing structure. As should be clear from examining how some of the model law’s recommendations would apply in Ohio, this isn’t going to be easy. Policies will—and should—vary considerably from state to state. Overall, the model law provides a great starting point for states when deciding how to help their online charter schools better serve students, and it couldn’t have come at a better time.
Editor’s note: This article was originally published on the National Alliance for Public Charter Schools’ Charter Blog.
Back in 2011, the Obama administration released its plan for improving teacher education. It included a proposal to revise Title II regulations under the Higher Education Act to focus on outcomes-based measures for teacher preparation programs rather than simply reporting on program inputs. It wasn’t a smooth process. Serious pushback and a stalemate on a federal “rulemaking” panel followed. Draft regulations were finally released in 2014, but were immediately met with criticism. Many advocates wondered if the regulations would ever be finalized.
On October 12, the wondering ceased—the U.S. Department of Education at last released its final teacher preparation regulations. While the final rules number hundreds of pages, the provisions garnering the most attention are those outlining what states must annually report for all teacher preparation programs—including traditional, alternative routes, and distance programs. Indicators are limited to novice teachers[1] and include reporting placement and retention rates of graduates during the first three years of their teaching careers, feedback via surveys on effectiveness from both graduates and employers, and student learning outcomes. These indicators (and others) must be included on mandatory institutional and state teacher preparation program report cards that are intended to differentiate between effective, at-risk, and low-performing programs.
The public nature of the report cards ensures a built-in form of accountability. States are required to provide assistance to any program that’s labeled low-performing. Programs that fail to earn an effective rating for two of the previous three years will be denied eligibility for federal TEACH grants, a move that could incentivize aspiring teachers to steer clear of certain programs.
What do these new federal regulations mean for the Buckeye State? Let’s take a closer look.
The Ohio Department of Higher Education already puts out yearly performance reports that publicize data on Ohio’s traditional teacher preparation programs. Many of the regulations’ requirements, like survey results and student learning outcomes, are included in these reports, so the Buckeye State already has a foundation to work from. But right now, Ohio releases its performance reports for the sake of transparency. Institutions aren’t differentiated into performance levels, and there are no consequences for programs that have worrisome data. In order to comply with the federal regulations, Ohio is going to have to start differentiating between programs—and providing assistance to those that struggle.
Helpfully, the differentiation into three performance levels occurs at the program level, not at the institutional level. This matters because the institutional label is an umbrella that covers several programs, and programs don’t always perform equally well. For example, in NCTQ’s 2014 Teacher Prep Review, the University of Akron’s (UA) undergraduate program for secondary education earned a national ranking of 57. But UA’s graduate program for secondary education earned a very different grade—a national ranking of 259. Using NCTQ’s review as a proxy for the upcoming rankings reveals that grouping all the programs at a specific institution into one institutional rating could hide very different levels of program performance.
Meanwhile, the regulations’ student learning outcomes indicator presents an interesting challenge. This indicator requires states to report annually on student learning outcomes determined in one of three ways: student growth (based on test scores), teacher evaluation results, or “another state-determined measure that is relevant to students’ outcomes, including academic performance.”
Requiring teacher preparation programs to be evaluated based on student learning won’t be easy for Ohio (or many other states). If Ohio opts to go with student growth based on test scores, it’s likely this will mean relying on teachers’ value-added measures. If this is indeed the case, the familiar debate over VAM is sure to surface, as is the fact that only 34 percent of Ohio teachers actually have value-added data available[2]. Even if Ohio’s use of value-added is widely accepted, methodological problems also exist. For instance, the federal regulations’ program size threshold is 25 teachers, and smaller preparation programs in Ohio aren’t going to hit the mark each year. This means that while bigger programs are going to be held accountable for student learning outcomes during graduates’ first three years of teaching, smaller programs aren’t going to be held to the same standard. There’s also the not-so-small problem that value-added data are most precise when they take into account multiple years of data—and novice teachers simply won’t have multiple years of data available.
Using overall teacher evaluation results isn’t a much better alternative. The Ohio Teacher Evaluation System (OTES) needs some serious work—particularly in the realm of student growth measures, which could imprecisely evaluate teachers in many subjects and grade levels due to the use of shared attribution and Student Learning Objectives (SLOs). The third route—using “another state-determined measure”—is also challenging. If there were a clear, fair, and effective way to measure student learning without focusing on test scores and teacher evaluations, Ohio would already be using it. Unfortunately, no one has been able to come up with anything yet. The arrival of new federal regulations isn’t likely to inspire a sudden wave of quality ideas.
In short, none of the three options provided for measuring student learning outcomes is a good fit. Worse yet, Ohio is facing a ticking clock. According to the USDOE’s timeline, states have the 2016-17 school year (which is already half over) to analyze options and develop a reporting system. States are permitted to use the 2017-18 school year to pilot their chosen system, but systems must be fully implemented by 2018-19. Whatever the Buckeye State plans to do in order to comply with the regulations, it’s going to have to make up its mind fast.
While the regulations’ call for institutional and state report cards is a step in the right direction in terms of transparency and accountability, implementation is going to be messy and perhaps impossible. There are no clear answers for how to effectively evaluate programs based on student learning outcomes. Furthermore, the federally imposed regulations seem to clash with the flexibility that the ESSA era was supposed to bring to the states.[3] Unless Congress takes on reauthorization of the Higher Education Act, it looks like states are going to have to make do with flexibility under one federal education act and tight regulations (and the resulting implementation mess) under another.
[1] A novice teacher is defined as “a teacher of record in the first three years of teaching who teaches elementary or secondary public school students, which may include, at a state’s discretion, preschool students.”
[2] The 34 percent is made up of teachers whose scores are fully made up of value-added measures (6 percent); teachers whose scores are partially made up of value-added measures (14 percent); and teachers whose scores can be calculated using a vendor assessment (14 percent).
[3] It’s worth noting that the provisions related to student learning outcomes did undergo some serious revisions from their original state in order to build in some flexibility. The final regulations indicate that the Department backed off on requiring states to label programs effective only “if the program had ‘satisfactory or higher’ student learning outcomes.” States are also permitted to determine the weighting of each indicator, which includes determining how much the student learning outcomes measure will impact the overall rating.
Hopes are high for a new kind of school in Indianapolis. Purdue Polytechnic High School will open in the 2017-18 school year, admitting its first class of 150 ninth graders on the near Eastside. It is a STEM-focused charter school authorized by Purdue University that will utilize a project-based multidisciplinary curriculum intended to give graduates “deep knowledge, applied skills, and experiences in the workplace.”
The location of the school in the Englewood neighborhood is a deliberate step for Purdue, which is aiming to develop a direct feeder for low-income students and students of color into Purdue Polytechnic Institute in West Lafayette. To that end, the high school will teach to mastery—each student moving on to the next level in a subject once they have demonstrated mastery at the current level. If that requires remediation of work, so be it. The school model is designed to keep students engaged, challenge them to reach their maximum potential, and meet high expectations. More importantly, a high school diploma will be “considered a milestone rather than an end goal,” according to the school’s website. College is the expected next step for all Purdue Polytechnic High School graduates. In fact, the high school’s curriculum is modeled on that of Purdue Polytechnic Institute in order to make the transition between the two seamless—minus 65 miles or so.
Shatoya Jordan and Scott Bess have been chosen to lead the new school as principal and head of school, respectively. Both were recently named to the latest class of Innovation School Fellows by The Mind Trust.
Applications for the first class opened last week and hopes are high that this innovative school model will open new doors for students in need of high quality options. Other states, including Ohio, should take note. This partnership could pay big dividends for Purdue, the community, and most importantly, the many low-income students who will have a new opportunity to advance. Hats off to Purdue for supporting this effort.
To ensure that pupils aren’t stuck in chronically low-performing schools, policymakers are increasingly turning to strategies such as permanent closure or charter-school takeovers. But do these strategies benefit students? A couple of recent studies, including our own from Ohio and one from New York City, have found that closing troubled schools improves outcomes. Meanwhile, just one study from Tennessee has examined charter takeovers, and its results were mostly inconclusive.
A new study from Louisiana adds to this research, examining whether closures and charter takeovers improve student outcomes. The analysis uses student-level data and statistical methods to examine the impact of such interventions on students’ state test scores, graduation rates, and matriculation to college. The study focuses on New Orleans and Baton Rouge, with the interventions occurring between 2008 and 2014. During this period, fourteen schools were closed and seventeen were taken over by charter management organizations. Most of these schools—twenty-six of the thirty-one—were located in New Orleans. The five Baton Rouge schools were all high schools.
The study finds that students tend to earn higher test scores after their schools are closed or taken over. In New Orleans, the impact of the interventions was positive and statistically significant on state math and reading scores. New Orleans high-schoolers also experienced an uptick in on-time graduation rates as a result of the interventions, though the Baton Rouge analysis reveals a negative impact on graduation (more on that below). No significant effects were found on college-going rates in either city. With respect to intervention type, the analysis uncovers little difference. Both closure and charter takeover improved pupil achievement. Likewise, the effects on graduation rates were similar—overall neutral when both cities’ results are taken together.
More importantly, the research indicates that these intense interventions benefit students most when they result in attendance in a markedly better school. Post-intervention, New Orleans students attended much higher-performing schools, as measured by value added, while in Baton Rouge, students landed in lower-quality schools, perhaps explaining the lower graduation rates. Furthermore, the analysis suggests that the positive effects are more pronounced when schools are phased out over time—that is, the closure or takeover is announced and no new students are allowed to enroll—thus minimizing the costs of disruption. These results largely track what we found in Ohio, where students made greater gains on state tests when they transferred to a higher-performing school post-closure.
While such interventions are not well liked by the general public, hard evidence continues to accumulate that, given quality alternatives, students benefit when policymakers close or strongly intervene in dysfunctional schools.
SOURCE: Whitney Bross, Douglas N. Harris, and Lihan Liu, The Effects of Performance-Based School Closure and Charter Takeover on Student Performance, Education Research Alliance for New Orleans (October 2016).
“If schools continue to embrace the potential benefits that accompany surveillance technology,” assert the authors of a new report issued by the National Association of State Boards of Education (NASBE), “state policymakers must be prepared to confront, and potentially regulate, the privacy consequences of that surveillance.” And thus they define the fulcrum on which this seesaw of a report rests.
Authors J. William Tucker and Amelia Vance do not exaggerate the breadth of education technology that can be used for “surveillance,” either by design or incidentally, citing numerous examples that range from the commonplace to ideas that Big Brother would love. We are all familiar with cameras monitoring public areas in school buildings, but as police use of body cameras increases, school resource officers will likely be equipped with them as well. The authors note that a district in Iowa even issued body cameras to school administrators. (Our own Mike Petrilli wondered a few years ago about putting cameras in every classroom.)
Cameras have been commonplace inside and outside of school buses for years, but now student swipe cards and GPS bus tracking mean that comings and goings can be pinpointed with increasing accuracy. Web content filters are commonplace in school libraries, but the proliferation of one-to-one devices has led to monitoring applications for use both in the classroom and in students’ homes. Even a student who provides his or her own laptop can be fully monitored when using school Wi-Fi networks. Social media monitoring of students is an imprecise science, but the authors report it is becoming more sophisticated and more widespread in order to identify cyberbullying incidents or to predict planned violent acts on school grounds. And into the realm of science fiction, they add increasing use of thumbprint scanners, iris readers, and other biometric data gathering apparatus.
The authors are thorough in listing the intended benefits of all of these surveillance efforts—student safety, anti-bullying, food-service auditing, transportation efficiency, etc. Those benefits likely made the adopted surveillance an easy sell in schools that have gone this route. But on the other side of the fulcrum are two equally large areas of concern: privacy and equity. These issues are addressed by the report on a higher, more policy-oriented level. Privacy concerns are addressed in terms of which data are, by default, kept by schools (all of it) and for what length of time (indefinitely). The authors assert that without explicit record keeping policies (or unless the storage space runs out), there is neither will nor incentive to do anything but save the data. Additionally, there are unanswered questions, such as what constitutes a student’s “educational record” and by whom that data may be accessed. For example, details of disciplinary actions may be educational records, but what about the surveillance video that led to that disciplinary action? Equity concerns are addressed in terms of varying and unequal degrees of surveillance (high school kids who can afford cars are not monitored on the way home at all, for example) as well as inequitable “targeting” of surveillance techniques on certain students before anything actionable has occurred.
As a result of this rather wide gulf between facts and policy, even NASBE’s good and thorough list of suggestions for state boards to balance student safety, privacy, and equity concerns seems more like a skateboarder’s effort to catch up with a speeding train. Those recommendations are: 1) keeping surveillance to a bare minimum, including discontinuing existing efforts once they are no longer needed; 2) using surveillance only in proportion to the perceived problem; 3) keeping all surveillance methods as transparent as possible to students, parents, and the public; 4) keeping discussion of surveillance use and possible discontinuation thereof open to the public; 5) empowering students and parents to use surveillance data in their own defense when disputes arise between students or between students and staff; 6) improving broader inequities in schools so that there is less precedent for families to believe that surveillance is being used inequitably; and 7) training for state and local boards, administrators, teachers, and staff on all aspects of surveillance methods, data use, public records laws, and the like.
Balancing students’ safety and their privacy is a difficult and sensitive job, and the recommendations enumerated here are good ones. But how many state board members have the bandwidth to address surveillance issues at that level of granularity? How many local board members (perhaps a more logical place for these decisions to be made)? And what happens when board member seats turn over? Legislative means of addressing these concerns are not even touched upon in this report.
In the end, it seems that the juggernaut of technology has spawned an unprecedented level of student surveillance, and diffuse, widespread fear for student safety—whether legitimate or not—serves only to “feed the beast.” As well-intentioned as this report and its recommendations are, even the most casual observer of today’s schools can’t help but conclude that the seesaw is definitely tipped toward more and more varied surveillance that is unlikely to be checked at the state policy level.
SOURCE: J. William Tucker and Amelia Vance, “School Surveillance: The Consequences for Equity and Privacy,” National Association of State Boards of Education (October 2016).