The problem with graduation rate statistics
We propose a new way of attributing graduation rates
Ohio’s largest online school, the Electronic Classroom of Tomorrow (ECOT), has recently caught flak for its low graduation rate. A New York Times article, for example, averred, “Publicly funded online schools like ECOT have become the new dropout factories.” It is true that a mere 39 percent of ECOT’s class of 2014 graduated in four years, meaning that thousands of pupils failed to reach the high school finish line on time. Meanwhile, a recent GradNation report called out the low graduation rates of some alternative, charter, and virtual schools (for a deeper dive into the charter school rates, see Susan Aud Pendergrass’s excellent piece on Flypaper). For some, these statistics are proof positive of educational failure.
We are in no way defending “dropout factories” of any stripe. It’s well known that Ohio’s virtual schools (like those in almost every other state) have struggled mightily to demonstrate an impact on student growth, and we’ve made no secret of our own misgivings about ECOT and many of its peers. But when it comes to graduation rates, how much of the blame belongs to the schools themselves? Is it possible that the way these numbers are calculated yields results that are, one might say, falsely deflated?
Like other states—and as required under federal regulations—Ohio uses something called the adjusted cohort graduation rate (ACGR). This method looks at a school’s ninth-grade cohort and tracks how many of those students graduate within four years (or five, if using a five-year statistic). Sounds straightforward, right? But because pupils transfer into and out of schools between the start of ninth grade and the end of twelfth grade, the cohort composition can change over time. Hence, to calculate the baseline number of students for which each Ohio school or district is held accountable, incoming transfers are added into the cohort while outgoing transfers are subtracted—provided they’ve enrolled in another school (i.e., not dropped out).
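In simplified form (our rendering, which omits the federal definition’s finer adjustments for students who emigrate or pass away), the four-year calculation looks like this:

```latex
\text{ACGR} \;=\; \frac{\text{cohort members who graduate within four years}}
{\text{first-time ninth graders} \;+\; \text{transfers in} \;-\; \text{transfers out}}
```

Note what this implies: a student who transfers into a school in twelfth grade counts fully in that school’s denominator, exactly as if she had been enrolled since ninth grade.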
If there were no transfers, it would be reasonable to hold high schools fully accountable for the graduation of pupils in their ninth-grade cohorts. But when a school’s pupils are highly mobile or transitory, accountability for their graduation gets more complicated. These cohort adjustments—while not unreasonable at first blush—could unfairly cast blame on a school when its late-transferring (in-bound) students fail to graduate on time. (The reverse is true as well—we could mis-assign credit for graduation even though the school had little to do with it.)
Consider a hypothetical student who spends her first three years of high school attending one of the Columbus City Schools before spending her fourth enrolled in an online school. Under the ACGR method, the school of record at the end of her senior year would be held accountable; thus, if she doesn’t graduate on time, the online school would lose credit. But what if, at the time of transfer, she was far behind academically—credit deficient and not on track for passing Ohio’s graduation exams? Should her last school of attendance shoulder the entire blame when just one-quarter of her high school career was spent in it? Of course not, but that’s the current accounting practice.
This scenario is quite likely in schools that serve lots of mobile students, including those at greatest risk of dropping out. And if, as it claims, ECOT (and schools like it) enrolls many transfer students who are far behind academically, then its low graduation rate as calculated via ACGR might not be an accurate gauge. How many students did it educate throughout their entire high school careers? How much of its final “adjusted cohort” transferred in much later? We don’t know, and that’s what makes accountability based on graduation rates a topic of legitimate contention.
Consideration ought to be given to how schools should be held accountable for those who arrive (or depart) during high school. (This would be good for all public high schools in the state, not just ECOT and its ilk.) One approach is to apportion responsibility according to the duration of a student’s enrollment at a particular high school. In the example above, the Columbus high school would be held accountable for 75 percent of the non-graduation of the student in question, while the online school would be held 25 percent accountable. Relative to the ACGR method, the district’s high school would face a penalty for not ensuring that the student remained on a sure graduation track from ninth grade through eleventh grade, while the last school of attendance would gain partial reprieve because it only educated the student for one-quarter of her high school career. Another alternative would assign full responsibility to the school where a student is enrolled for the greatest amount of time from ninth grade through twelfth grade.[1] Compared to current practice, either option would more fairly apportion to schools the credit or blame for students’ graduation or lack thereof.
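To make the first option concrete, here is a minimal sketch (in Python, using hypothetical names; nothing here reflects an actual state data system) of how duration-weighted graduation rates might be computed:

```python
from dataclasses import dataclass

@dataclass
class Enrollment:
    school: str
    years: float  # years enrolled at this school during grades 9-12

def apportioned_rates(students):
    """Duration-weighted graduation rates per school.

    `students` is a list of (enrollments, graduated) pairs. Each school
    is credited or debited in proportion to the share of the student's
    high school career spent there, rather than all or nothing.
    """
    graduates, cohort = {}, {}
    for enrollments, graduated in students:
        total = sum(e.years for e in enrollments)
        for e in enrollments:
            weight = e.years / total
            cohort[e.school] = cohort.get(e.school, 0.0) + weight
            if graduated:
                graduates[e.school] = graduates.get(e.school, 0.0) + weight
    return {s: graduates.get(s, 0.0) / n for s, n in cohort.items()}

# The hypothetical student above (three years in Columbus, one online, no
# diploma), plus a classmate who stayed in Columbus all four years and graduated.
students = [
    ([Enrollment("Columbus high school", 3), Enrollment("Online school", 1)], False),
    ([Enrollment("Columbus high school", 4)], True),
]
print(apportioned_rates(students))
# Columbus: 1.0 / 1.75 ~= 0.57; Online school: 0.0 / 0.25 = 0.0
# Under the ACGR, Columbus would have scored a perfect 100 percent.
```

Under this weighting, the Columbus school no longer escapes all responsibility for the student it educated for three years, while the online school’s denominator shrinks to reflect its one-year role.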
A more refined approach to graduation accountability might also better align school incentives with student interests. Current practice tempts schools to nudge low-achieving students toward an alternative school in order to avoid accountability for non-graduation and artificially inflate the “sending” school’s own graduation rate. (Such alternatives could include dropout-recovery charter schools—sometimes sponsored by a district—or online schools.) We don’t know how often this occurs.[2] But if we assign schools partial (if not full) responsibility for non-graduation, they won’t be able to duck accountability by counseling out struggling pupils who might actually be better served in their current schools.
High school graduation rates will continue to serve as workhorse accountability measures under the revised federal education law (ESSA). The law requires a graduation rate “indicator” on school report cards, and states must intervene in schools with low graduation rates. To be sure, federal laws and regulations might eventually forbid a different approach to calculating graduation rates. But Ohio policy makers should at least press federal officials for the flexibility to implement an alternative graduation rate measure. A proper accounting method would better ensure that schools are fairly assigned responsibility for students’ graduation—and align incentives in a way that could help more students earn their diplomas.
[1] ESSA includes a provision that requires, in certain circumstances, a school district (or charter school) to assign accountability for a student’s non-graduation either to the school in which she spent the greatest proportion of time in grades 9–12 or to the school in which she was most recently enrolled.
[2] An example of a school withdrawing students so that they could enroll in a district-sponsored “digital academy” can be found in the auditor of state’s “Interim Report on Student Attendance Data and Accountability System” (October 4, 2012), page 21.
Since President Obama signed the Every Student Succeeds Act (ESSA) in December, much discussion has centered on changes related to school accountability. Under the new law, a state’s accountability plan must include long-term goals, measures of progress toward those goals, and an explanation of how the state plans to differentiate schools. This revised system would replace the accountability plans that states developed under their still-operational NCLB waivers, and it would take effect during the 2017–18 school year. ESSA’s accountability requirements also involve the dissemination of annual report cards for the state, districts, and schools that contain a variety of accountability indicators and a plethora of data.
NCLB also required school report cards, so the idea itself is nothing new. What’s changed is what the report cards contain. For instance, NCLB required states to include information on state assessment results, the percentage of students not tested, graduation rates, and performance on adequate yearly progress measures. ESSA moves away from adequate yearly progress while mandating four types of indicators: achievement, another academic measure (probably growth for elementary and middle schools and graduation rates for high schools), progress for English language learners, and “other indicators of school quality and student success.”
Furthermore, ESSA calls for a tremendous amount of new data to be reported on these report cards. For example, NCLB already required states to disaggregate achievement data according to race and ethnicity, gender, English language proficiency, migrant status, disability status, and low-income status. ESSA adds homeless students, foster care students, and children of active duty military personnel to this list of subgroups. ESSA also requires states to include the following data in their report cards (for information purposes only, not as an element used in formal accountability):
- Disaggregated results that are already reported to the Civil Rights Data Collection, including exclusionary discipline rates and chronic absenteeism;
- The professional qualifications of educators (which were already required by NCLB);
- Federal, state, and local per-pupil expenditures;
- The number and percentage of students with the most significant cognitive disabilities taking alternate assessments, by grade and subject;
- Where available, disaggregated rates of students who graduate from high school and enroll in higher education; and
- At the state level only, results of NAEP compared to the national average.
This laundry list of required data reporting makes it clear that while ESSA may have reduced the number of mandates on states, it greatly increased the number of things they must report. Even for a state like Ohio, which already has a robust accountability and report card system, meeting ESSA’s report card mandate will likely take more work than its current system requires, though perhaps necessary work, since the data will shine a light on long-darkened areas (like school funding).
In general, the report card system Ohio established to comply with NCLB is already on the right path toward fulfilling the ESSA statute. Ohio’s report cards currently include indicators that report on student achievement, student progress, gap closing, graduation rates, K–3 literacy, student preparation for college and career, and gifted student performance. Much (but not all) of the data ESSA requires to be reported already is, and since the system complies with NCLB’s disaggregation mandates, Ohio already disaggregates for the majority of ESSA’s required subgroups (though it will need to add the new ones).
So what changes will Buckeye policy makers need to make—or seize the opportunity to make—in order to fully comply with the new federal law and its accompanying regulations? Though the chart below is not comprehensive, it offers a glance at some of ESSA’s major requirements for report cards compared to Ohio’s current system. The shaded areas are issues where Ohio appears to have the most work to do as it revises its accountability plan.
ESSA requirement for state report card | Ohio’s current report card equivalent | Next steps for Ohio |
--- | --- | --- |
Information on student achievement on the state’s chosen academic assessments for all students, disaggregated by subgroup.[1] | -The Achievement indicator, which includes Indicators Met (how many students have met the state’s minimum proficiency level) and Performance Index (which awards districts and schools points based on every student’s level of achievement). | Update both measures in the Achievement indicator so that they disaggregate data for ESSA’s additional required subgroups. |
For public elementary and middle schools, information concerning student performance on another academic indicator for all students and disaggregated by subgroup. States are permitted to use a measure of student growth or “another valid and reliable statewide academic indicator that allows for meaningful differentiation in school performance.” | -The Progress indicator, which is composed of value-added data for grades 4–8 in math and reading. (High school value added was reported starting in 2014–15 and will become a formal part of the value-added rating system in 2015–16.) This indicator is intended to measure how much each student learns in a year, disaggregating data into three student subgroups: gifted students, students with disabilities, and students in the lowest 20 percent of achievement statewide. -The Gap Closing indicator, which measures the academic performance of specific demographic groups and compares them to the collective performance of all students in Ohio. This indicator is intended to gauge whether schools are closing achievement gaps. | - The Progress indicator almost certainly fulfills the student growth option. - The Gap Closing indicator could fit the description of “another valid and reliable statewide academic indicator that allows for meaningful differentiation in school performance.” - Update all indicators so that they disaggregate data for ESSA’s additional required subgroups. |
For public high schools, information on graduation rates for all students, disaggregated by subgroup. | -The Graduation Rate indicator, which measures how many students graduate within four or five years of entering ninth grade for the first time. -ESSA requires states to use the four-year adjusted cohort graduation rate (Ohio does) and permits states to use the extended-year adjusted cohort graduation rate (Ohio also grades the five-year graduation rate). | Update the indicator so that it disaggregates data for ESSA’s additional required subgroups. |
Information on the number and percentage of English learners achieving English language proficiency. | None | -ESSA requires that districts provide “an annual assessment of English proficiency of all English learners” and that the assessment must be “aligned with the state’s English language proficiency standards.” -ESSA requires that all report card indicators are annually measured except progress in achieving English language proficiency. This means that the state will not have to add a graded measure to its report card—it will just need to report the data from the annual assessment. |
Information on no less than one indicator of school quality or student success for all students, disaggregated by subgroup. This could include measures of student engagement, educator engagement, student access to and completion of advanced coursework, post-secondary readiness, or school climate and safety. | The Prepared for Success indicator, which contains six distinct measures: college admission tests; College Credit Plus; industry credentials; honors diplomas awarded; Advanced Placement (participation rate and percentage scoring three or above); and the International Baccalaureate Program (participation rate and percentage scoring four or above). This measure is scheduled to become an A–F rated component on Ohio’s report cards in 2015–16. | -To fulfill this mandate, ESSA permits both a measure of student access to and completion of advanced coursework and a measure of post-secondary readiness. The Prepared for Success indicator seems to meet both of these criteria. -Update the indicator so that it disaggregates data for ESSA’s additional required subgroups. |
Information on the progress toward state-designed long-term goals for all students, disaggregated by subgroup. | None | Under Ohio’s NCLB waiver, the Gap Closing indicator would have covered this requirement. As part of the transition to ESSA, Ohio will need to craft revised long-term goals and determine how to measure the progress of all of its students and its subgroups against those goals. It’s possible that the Gap Closing indicator can still be used. |
Information on the percentage of students assessed and not assessed, disaggregated by subgroup. | None | Per NCLB, untested students are included on Ohio’s report cards. The state will have to begin to disaggregate for ESSA’s additional required subgroups. |
Of course, guidance for states with regard to ESSA doesn’t stop with federal statute. The U.S. Department of Education (USDOE) recently released its proposed regulations. The chart below outlines a few of USDOE’s proposed regulations and assesses whether Ohio’s current report card system fits within them. If these regulations become final, the shaded areas are where Ohio will need to make changes:
Proposed regulation | Ohio’s status | Next steps for Ohio |
--- | --- | --- |
When sharing information about student achievement on the state’s chosen academic assessments, states must measure proficiency rates—and proficiency rates alone. | Ohio’s Achievement indicator currently includes Performance Index, which doesn’t seem to match what the proposed regulations allow as a measurement of student achievement. | If it turns out that Performance Index doesn’t satisfy the proposed regulations, Ohio would need to establish a student achievement indicator that measures only proficiency rates and ensure that the new measure disaggregates data for ESSA’s additional required subgroups. Ohio could choose to make Performance Index its own, separate indicator in addition to the required achievement indicator. |
States must assign a comprehensive, summative rating for each school. | Ohio already plans to implement an overall rating for each district and school starting in 2018. | Ensure that the state implements overall ratings on schedule. Carefully consider how to weight each indicator. |
States must report individual schools’ performance on each indicator. | Ohio already does this with its current indicators. | Ensure that the performance of individual schools is reported for each selected indicator. |
Indicators of Academic Progress and School Quality or Student Success must be supported by research indicating that performance or progress will likely increase student achievement or graduation rates. | Ohio doesn’t explicitly explain how the measures on its current report cards would increase achievement or graduation rates. | Many of Ohio’s current indicators and measures have research to back them up; ODE will likely need to include links to relevant research. |
States must use a minimum of four distinct indicators for accountability and must use the same measures within each indicator for all schools. | Ohio currently has six distinct indicators and uses the same measures for all schools, except for its dropout-recovery charter schools and career and technical education districts. | Determine how (if necessary) to include dropout-recovery charters and career and technical education districts in the system. |
States must establish at least three distinct performance levels on each indicator. | Ohio assesses performance of districts and schools on an A–F grading scale, which equates to five performance levels. | Retain the current performance levels. |
States must include all public charter schools in their accountability systems. | Ohio already does this, though dropout-recovery charter schools are part of an alternative accountability system. | Determine how (if necessary) to include dropout-recovery charters in the system. |
The dissemination of state report cards must occur no later than December 31 each year, beginning in the 2017–18 school year. | Ohio’s report cards are typically released in August. | Retain the current system. |
Report cards must include the percentages of students performing at each level of achievement, by grade, on reading, math, and science tests. | Ohio already does this on the Performance Index portion of the Achievement Indicator. | Retain the current Achievement Indicator. |
Luckily, ESSA contains provisions that allow states and districts to use report cards that were already in effect and to reduce the “duplication of effort by obtaining the information required…through existing data collection efforts.” These provisions could be interpreted as confirmation that Ohio can revise what it’s doing rather than start from scratch, so long as districts heed the legal requirement to make their report cards accessible to the public on their websites. One can hope that’s so, because a directive for Ohio to jettison a perfectly acceptable state model in the name of a federal law intended to return power to states is, well, a whole new kind of federal meddling.
[1] States are tasked under ESSA with determining the appropriate number of students that must be in a group before it is disaggregated.
In April, the Government Accountability Office (GAO) released a report examining recent trends in the racial and socioeconomic composition of America’s public schools. Between the 2000–01 and 2013–14 school years, the study finds, the fraction of U.S. schools that were both high-poverty (75 percent or more eligible for free or reduced-price lunch, or FRPL) and high-minority (75 percent or more African American or Hispanic students) rose from 9 to 16 percent.
While the GAO analysts caution that their analyses “should not be used to make conclusions about the presence or absence of unlawful discrimination,” to headline writers at the Washington Post, USA Today, and the Los Angeles Times, the findings suggest “resegregation” in American schools. The Post editorial board declared a “resurgence of resegregation.” But is this a fair interpretation?
There are at least two problems with drawing such a conclusion. The first is that the GAO analysis doesn’t account for overall demographic trends. American student demographics shifted during this period: as a share of the national student population, Hispanic students increased from 16 percent to 25 percent between 2000 and 2014, while African American pupils remained a virtually unchanged fraction. Given the growth of the Hispanic student population, we shouldn’t be surprised that the percentage of high-minority schools increased over time as well.
A more refined gauge of whether schools are becoming increasingly segregated over time—or, in more neutral language, “increasingly racially isolated”—would examine how the demographic makeup of America’s schools compares to the national composition of students. The more “dissimilar” a school’s makeup is relative to the overall population, the more racially isolated it is—and vice versa. Applying this type of analysis in a recent issue of Education Next, economist Steve Rivkin found no evidence that school segregation increased from 2000 to 2012, at least for African American students.
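One standard measure of this sort is the dissimilarity index (we offer it here to illustrate the approach, not as the specific metric Rivkin employs):

```latex
D \;=\; \frac{1}{2} \sum_{i=1}^{n} \left| \frac{b_i}{B} - \frac{w_i}{W} \right|
```

Here b_i and w_i are the counts of two groups in school i, and B and W are their overall totals. D ranges from 0 (every school mirrors the overall composition) to 1 (complete separation) and can be read as the share of either group that would have to change schools to even out the distribution. Crucially, a measure like this moves with how students are distributed across schools, not with how many minority students there are overall.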
The second problem is the GAO’s use of FRPL eligibility as a marker for poverty. That’s important because the fraction of pupils eligible for FRPL also noticeably increased during the study period—from 38 percent to 52 percent. The increase in FRPL eligibility could be partly (or even largely) attributable to the 2010 enactment of the Community Eligibility Provision, a program that allows qualifying districts to deem non-low-income kids FRPL-eligible. (Rising childhood poverty, or near-poverty, because of the recession and its aftermath could also be a contributor.) Given these trends, one might naturally expect to see more U.S. schools meeting the 75 percent threshold that GAO uses to identify an economically isolated school.
In sum, as Matt DiCarlo of the Shanker Institute asks of the GAO findings, “Does it really represent ‘resegregation?’” Not necessarily. Almost everyone can agree that school segregation is an important but complicated matter. Let’s not reduce it to a headline based on a questionable interpretation of the data.
SOURCE: “Better Use of Information Could Help Agencies Identify Disparities and Address Racial Discrimination,” United States Government Accountability Office (April 2016).
Ohio’s second-ever school district CEO was chosen at the end of May by the members of the Youngstown City Schools Academic Distress Commission (ADC). He is Krish Mohip, a former teacher and principal and current school administrator in Chicago. He has a track record of turning around low-performing schools in the Windy City, and make no mistake: that is his charge in Youngstown as well.
Mohip was chosen from a field of nearly three dozen candidates and was introduced to Youngstown stakeholders and the public last week. So far he is enthusiastic, effusive, and inclusive. He told WFMJ-TV that he is thrilled to be in Youngstown and can’t wait to get to work gathering input and collaborating with the ADC, teachers, parents, the elected school board, and the community to create the academic improvement plan that is his required first order of business. In an in-depth interview with Vindy Radio last Friday, Mohip was thoughtful and engaging but clear about his core convictions: all parents want the best for their children, all children can learn, and it is the schools’ job to make that learning happen.

We are encouraged by Mohip’s track record and enthusiasm for the work, an attitude sorely lacking in Youngstown for the last few years. Importantly, while he says that HB 70 (the legislation that created the new ADC and the CEO position) “will not save the schools,” he knows that it forms the basis of his charge and is the measuring stick against which his success or failure will be judged. There are a number of open questions about the extent of the CEO’s power, especially with regard to the elected school board. Luckily, Mohip seems willing to define the process for himself while keeping a steady eye on what constitutes success: true academic improvement for all kids and schools. Check out this extended interview with Mohip for more of his insights.
We are hopeful that the entrenched interests that have thrived during Youngstown’s many years of academic failure will have no choice but to get on board with the new paradigm. We urge the parties involved in ongoing litigation against HB 70 and the Academic Distress Commission to stand down and join Krish Mohip as he begins his vital work. We will watch the unfolding process closely, especially since Youngstown could become a template for other districts in Ohio.
This week Ohio Auditor Dave Yost visited United Preparatory Academy (UPrep), a high-performing elementary charter school in the Franklinton neighborhood of Columbus. UPrep is part of the United Schools Network of charter schools whose middle schools and CEO, Andy Boy, were profiled recently by the Columbus Dispatch (“Charter school producing hoped-for results” and “Charter school stands out”).
The network’s two middle school campuses serve student populations that are over 95 percent and 82 percent economically disadvantaged, respectively; yet eighth graders at both campuses outscored statewide averages for both reading and math proficiency by margins that the Dispatch calls “eye-popping.” UPrep serves students in grades K–2 and will expand to third grade in the fall (and eventually through fifth grade).
Auditor Yost toured the UPrep campus and visited classrooms. He also met with Andy Boy, who described the network’s future plans, the challenge of securing school facilities, and the overall impact that the schools have made on student outcomes as well as the neighborhoods in which they are located.
“Charter schools are accustomed to doing more with less. In the case of United Preparatory Academy, they’re doing a lot more with less—and doing it extremely well,” Auditor Yost said. “This is an impressive environment for learning. These students are fortunate to be here.”
Auditor Yost has long been a voice for quality in Ohio’s charter school movement. His recent attendance audits at Ohio charters and district schools underscore the need to improve how all schools—but particularly those serving uniquely challenged populations with high mobility rates and lower-than-average attendance—account for students and receive funding for them. He was also a vocal supporter of the commonsense reforms in HB 2, and he will host an inaugural charter school summit this August to share best practices in a range of areas relevant to the charter community. Yost, a self-described “strong proponent of the charter school movement” and of school choice broadly, was recently named one of ten national Champions for Charter Schools by the National Alliance for Public Charter Schools.
CEO Andy Boy said of the auditor’s visit, “We’re happy to have hosted Auditor Yost at UPrep and are appreciative of his leadership in advocating on behalf of high-quality charter schools. The auditor and I share in the belief that public schools must be careful stewards of tax dollars and that all students—regardless of ZIP code—deserve access to high-quality schools.”
Kudos to Auditor Yost for visiting the school and wanting to see firsthand one of Ohio’s top-performing charter schools. Places like UPrep and the other schools in the United Schools Network serve as a proof point of what’s possible in urban public education, making a life-changing impact for students living in some of Columbus’s poorest neighborhoods. Congratulations to United Schools Network for their well-earned recent attention and recognition from Auditor Yost.
The Every Student Succeeds Act (ESSA) requires states to incorporate at least one non-academic indicator—which might include (but isn’t limited to) factors like school climate or safety—into their accountability frameworks. That makes this study published in Educational Researcher rather well-timed. The authors set out to test the theory that reductions in school violence and/or improvements to school climate would lead to improved academic outcomes. Instead, the evidence they discovered suggests the relationship flows in the opposite direction: A school’s improvement in academic performance led to reductions in violence and improved climate—not the other way around.
The study’s authors point to serious gaps in past studies of school climate and safety, many of which illustrated only correlation (not causation) among the variables examined. This motivated them to test the assumption that improved school climate must come first in the chicken-and-egg scenario. Using six years of student survey results (2007–13) from a representative sample of 3,100 California middle and high schools, the analysts employed a research design known for its ability to test causality when large-scale experimental designs aren’t possible. (For the curious, this is described as a “cross-lagged panel autoregressive modeling design,” which tests whether variables measured at different points in time predict later changes in one another.) They looked at three waves of survey data based on student reporting of school violence and school climate, along with schools’ academic performance (as measured by California’s academic performance index). Controlling for each variable’s relationship to the others, the analysts examined whether gains in one time period would lead to improvements in another. For example, do improvements in school safety later lead to better academic outcomes, and/or vice versa?
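In schematic terms (our simplification, not the authors’ exact specification), the model estimates equations along these lines for academic performance A, violence V, and climate C:

```latex
\begin{aligned}
A_{t+1} &= \alpha_1 A_t + \beta_1 V_t + \gamma_1 C_t + \varepsilon_{t+1} \\
V_{t+1} &= \alpha_2 V_t + \beta_2 A_t + \gamma_2 C_t + \delta_{t+1}
\end{aligned}
```

The autoregressive terms (the alphas) capture each variable’s own persistence over time; the “cross-lagged” coefficients (the betas and gammas) reveal whether one variable predicts later changes in another after that persistence is accounted for. It is those cross-lagged paths that let the authors ask which direction the relationship runs.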
Not surprisingly, the study confirms that school violence and climate are closely associated. Like past studies, it also confirms that low levels of violence and positive school climates are associated with high levels of school performance. But the characteristics of a safe and positive school aren’t necessarily a prerequisite for higher achievement. Researchers found that higher school performance in the first wave of data (2007–09) led to lower school violence and higher school climate ratings in the second wave of data (2009–11). This pattern remained true for the third wave of data (2011–13). Meanwhile, they found no evidence that reducing violence or improving school climate first led to increased academic performance across the time periods studied. (They hypothesized, however, that when schools undertake academic improvements, they might automatically include “issues of climate and victimization” as part of reform efforts.)
The researchers concluded that school academic improvement is “a central factor in reducing violence and enhancing a school’s climate.” To explain the findings, they noted that teachers who hold high expectations for students academically may have more positive relationships with them generally. In addition, one can imagine improved teaching contributing to a more positive school culture overall. For example—as any teacher can attest—better instruction diminishes time spent off task and the misbehavior associated with it. Without further study, however, it’s difficult to know exactly how improved academic outcomes led to better climates and lower violence in the schools studied, or to what extent better teaching and school leadership drove school improvement to begin with. One is also left wondering how much academic achievement can be boosted in a school with a negative culture and unsafe corridors. Still, this is an interesting study lending credence to the idea that school improvement efforts must focus on academic outcomes as much as, or at least alongside, attempts to improve climate and safety.
SOURCE: Rami Benbenishty, Ron Avi Astor, Ilan Roziner, and Stephanie L. Wrabel, “Testing the Causal Links Between School Climate, School Violence, and School Academic Performance: A Cross-Lagged Panel Autoregressive Model,” Educational Researcher (April 2016).