Editor's note: This post is a submission to Fordham's 2018 Wonkathon. We asked assorted education policy experts whether our graduation requirements need to change, in light of diploma scandals in D.C., Maryland, and elsewhere. Other entries can be found here.
The national high school graduation rate has risen from 73 percent to 84 percent in the decade since 2006, which is good news. But the validity of this increase has been called into question, in part because twelfth-grade NAEP reading and math scores have stayed flat over the same period.
There’s more than a little suspicion that at least some of the increase is a by-product of the heightened focus on graduation rates by states and districts over this period, which gave schools and teachers an incentive to let marginal students earn course credits. Unlike many other academic metrics, the standards for student grades and course credits are subjective, and are largely set at the school level.
Yet ESSA doubles down on graduation rate as a core academic performance metric. States are required to include it in their accountability framework, and it is also the sole metric states must use to identify high schools eligible for “comprehensive support.”
Ironically, a convincing case can be made that the graduation rate metric used under federal guidance, the four-year adjusted cohort graduation rate (ACGR), does not even meet ESSA's own standards for being a valid indicator of “meaningful differentiation” between schools.
Under ESSA, the academic indicators states use for their accountability frameworks must “provide for fair, valid, and uniform annual meaningful differentiation across schools and LEAs.”
It is hard to argue that ACGR meets this standard. The lack of a uniform standard for earning grades and course credits from school to school and state to state, and the large variation in the skills and knowledge a diploma signifies at different high schools, make it very hard to compare grad rates “across schools and LEAs.”
There is, however, flexibility under ESSA that states can use to increase the uniformity, validity, and integrity of the graduation rate metric.
For instance, an inherent problem in the calculation of ACGR is that non-graduating students are attributed to the cohort of the last school they attended, even if they spent only a few days there. So a student who spends three-plus years at his neighborhood school, falls behind, and then transfers to a charter school in the spring semester of his senior year is counted against the grad rate of the charter school simply because it was the school he attended when the four-year clock expired.
The school where he fell behind for three-and-a-half years escapes all accountability. This creates a powerful incentive for high schools to game the system by counseling students at risk of not graduating in four years to enroll in another school (often a virtual program).
The “partial attendance provision” of ESSA (Section 1111(c)(4)(F)) can make it harder to play this game. Under it, students who have not spent at least half of a year at a school before dropping out (or failing to graduate when the four-year clock is up) are attributed, for the purpose of calculating graduation rate, to the graduating cohort of a previous high school they attended.
States can use this provision to reduce the incentive for high schools to “counsel out” at-risk students. Unfortunately, there is little evidence in state ESSA plans that states intend to take advantage of it.
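For readers who want to see the mechanics, the attribution logic described above can be sketched in a few lines of code. This is an illustrative simplification, not the statute's actual rules: the half-year threshold (90 of a nominal 180 school days) and the function name are assumptions for the example.

```python
# Illustrative sketch of cohort attribution under a partial-attendance rule.
# Assumption: "half of a year" means 90 of a ~180-day school year.

def attribute_cohort(enrollments, half_year_days=90):
    """enrollments: chronological list of (school, days_enrolled) pairs.
    Returns the school whose graduation cohort a non-graduate counts
    against, skipping schools where the student spent under half a year."""
    for school, days in reversed(enrollments):
        if days >= half_year_days:
            return school
    # If no school meets the threshold, fall back to the last school.
    return enrollments[-1][0]

# The senior who transferred to a charter school late in his final year:
print(attribute_cohort([("Neighborhood High", 630), ("Charter High", 40)]))
# → Neighborhood High
```

Under this rule, the brief charter enrollment no longer absorbs the accountability hit; the school where the student actually fell behind does.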
Under ESSA, all high schools with a grad rate under 67 percent must be identified as eligible for “comprehensive support and improvement,” which is equivalent to labeling them as failing schools.
For most traditional high schools in which the student population is very stable, a grad rate of below 67 percent would indeed indicate a problem. But for schools with a highly mobile student population, such as some low income urban schools and many virtual high schools, it might not.
Here’s why: Imagine a school in which half its students enrolled as juniors and were a full year behind in credits when they arrived. Certainly none of these students would graduate with their cohort, but say that every one of them accumulated credits at a normal pace from the moment they enrolled. This school would have an ACGR no higher than 50 percent. But is it a failing school?
Probably not. Further evidence is required to make that judgment. ACGR alone can give a false signal for schools with high levels of student mobility.
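The arithmetic behind that 50 percent ceiling can be made explicit. The numbers below are invented for illustration, not drawn from any real school:

```python
# Toy numbers (assumed, not real data) for the hypothetical school above.
on_track = 100       # students enrolled since ninth grade, on pace to finish
late_arrivals = 100  # students who transferred in as juniors, a year behind

cohort = on_track + late_arrivals
# Even if every late arrival earns credits at a normal pace after enrolling,
# none can finish inside the four-year window, so at best:
best_case_acgr = on_track / cohort
print(f"Best-case ACGR: {best_case_acgr:.0%}")  # → Best-case ACGR: 50%
```

The point is that the ceiling is set entirely by enrollment timing, before the school has taught a single class — which is why a low ACGR at a high-mobility school is ambiguous.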
Luckily, ESSA now gives states the flexibility to not be slavishly tied to ACGR when making the judgment of which schools to include on the list of comprehensive-support high schools. The statutory wording in the relevant section does not actually specify that ACGR is the metric that must be used.
Every other section in ESSA that refers to grad rate does use the term-of-art “ACGR,” but not the section on identifying schools for comprehensive support. That section just uses the phrase “fails to graduate two-thirds of its students.”
In the draft accountability rules adopted in 2017, the DOE did specify that ACGR had to be the metric for that purpose. But those rules are now gone, courtesy of the Congressional Review Act.
So states now have the flexibility to look a little deeper than ACGR when identifying schools for comprehensive support. Taking advantage of this flexibility could prevent the waste of scarce resources on schools whose low ACGR is just an artifact of a highly mobile student population.
It is likely that graduation rates under ESSA will continue to rise—and will continue to be viewed with skepticism. When the standards for earning credit are so subject to manipulation and the intricacies of how ACGR is calculated are so open to gaming, its usefulness as a reliable indicator of performance is diminished.
But states can take a step towards making grad rate a metric that is “fair, valid, and uniform ... across schools and LEAs” by taking advantage of the flexibility that ESSA provides.