U.S. News & World Report’s annual Best High Schools Rankings are a source of great pride to some schools, and great consternation to others. Schools, in addition to earning a place on the publication’s hierarchical list, can also earn a “Bronze,” “Silver,” or “Gold” medal. Yet the way in which these honors are determined is puzzlingly problematic.
A school’s math and reading proficiency rates on state exams, together with its graduation rate, can net the school a “Bronze” medal. Attaining a “Silver” or “Gold” medal, however, also requires a “College Readiness Index” (CRI) score at or above the median of all rated high schools, and 75 percent of the CRI is determined by “performance” on AP exams. Performance, though, is not measured by a straightforward exam passage rate (i.e., the number of exams passed, out of the number of exams taken) or even by an average score, but by the percentage of all graduates who pass at least one AP exam.
A performance metric that calculates a success rate on a task that includes people who haven’t even attempted that task seems like an odd way of measuring success. Yet this same metric, sometimes called the Equity and Excellence (E&E) rate, is also used by the College Board to rank the AP performance of states, and is featured prominently in the organization's annual AP reports. The Washington Post also uses the metric alongside its Challenge Index in Jay Mathews’ America's Most Challenging High Schools rankings.
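To make the distinction concrete, here is a minimal sketch, using a hypothetical school’s numbers (illustrative only, not real data), of how far the two calculations can diverge:

```python
# Hypothetical school (illustrative numbers only): 200 graduates,
# 50 of whom sit for AP exams.
graduates = 200
exams_taken = 120                   # total exams sat by those 50 students
exams_passed = 45                   # exams scored 3 or higher
students_passing_at_least_one = 30  # distinct graduates with at least one passing score

# Straightforward passage rate: exams passed out of exams taken.
passage_rate = exams_passed / exams_taken             # 0.375 -> 37.5%

# Equity & Excellence (E&E) rate: graduates passing at least one exam,
# out of ALL graduates, including those who never sat for an exam.
ee_rate = students_passing_at_least_one / graduates   # 0.15 -> 15%

print(f"Passage rate: {passage_rate:.1%}")
print(f"E&E rate:     {ee_rate:.1%}")
```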
While the E&E rate does represent equity by encouraging schools to expand AP access, and while it could represent excellence, it is problematic as a college readiness indicator for a number of reasons having to do with the measure’s numerator—the number of graduates who pass at least one AP exam—as well as its denominator—the total number of graduates. Let’s take each in turn.
First, a school’s E&E rate does not increase when students pass multiple AP exams, because the numerator credits only one passed exam per graduate. A student who passes four out of five exams, for example, has the same effect on the rate as a student who passes one out of five. Yet surely the former is better prepared than the latter, who failed four exams to pass just one. If your child passed only one of five classes his freshman year of college, would you think he was ready? Of course not.
Second, a school’s E&E rate credits all passing scores equally. AP exams are scored on a five-point scale, with 3, 4, and 5 all counting as passing scores. But the E&E rate doesn’t differentiate among these, even though the College Board has always maintained that higher scores signal better preparedness for advanced college-level work. Indeed, a nontrivial number of colleges, at least among the more competitive ones, are restricting AP credit in various ways, including granting credit only for exam scores of 4 and 5. And some colleges that still grant credit for 3s award more credits for a 5.
How, then, can schools increase their E&E rate and thus climb the U.S. News rankings? Setting aside a drop in the graduation rate (which, ironically, would boost the rate by shrinking the denominator!), schools have to either get more students to attempt an AP exam or get a higher proportion of the same number of students to pass one. One way to accomplish the latter is simply to convince students to take more exams, since the odds of passing at least one increase with more attempts.
Indeed, because failed exams appear nowhere in the formula, the E&E rate, and thus the CRI, will increase even as the failure rate rises significantly, as long as more students are taking more exams.
Table 1 shows how much a school’s failure rate can rise before its E&E rate takes a hit, depending on how quickly its pool of test takers expands. As long as the failure rate stays below the value in the second column, the E&E rate will still go up, boosting the school’s ranking. (A short sketch of the calculation behind these thresholds follows the table.)
Table 1. How much the exam failure rate can rise before the E&E rate declines, as a function of the increase in test takers, starting from an initial 40 percent failure rate.

| Percentage increase in test takers | Break-even failure rate | Percentage increase in the failure rate |
| --- | --- | --- |
| 10% | 45.5% | 13.6% |
| 25% | 52.0% | 30.0% |
| 50% | 60.0% | 50.0% |
| 75% | 65.7% | 64.3% |
| 100% | 70.0% | 75.0% |
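The thresholds in Table 1 follow from a simple break-even condition: the E&E rate holds steady as long as the number of graduates passing at least one exam does not fall. A minimal sketch of that calculation, assuming for simplicity that each test taker sits for one exam and that the school starts at the table’s 40 percent failure rate:

```python
# Break-even failure rates behind Table 1, under a simplified model in which
# each test taker sits for one exam (so test takers and exams move together)
# and the school starts at a 40 percent failure rate.
initial_failure_rate = 0.40
initial_pass_share = 1 - initial_failure_rate   # 60% of the original takers pass

for growth in (0.10, 0.25, 0.50, 0.75, 1.00):
    new_takers = 1 + growth   # test-taker pool relative to the original
    # The E&E rate holds steady as long as the number of passes does not fall,
    # so all growth beyond the original number of passes can consist of failures.
    breakeven_failure_rate = 1 - initial_pass_share / new_takers
    rate_increase = breakeven_failure_rate / initial_failure_rate - 1
    print(f"{growth:.0%} more test takers -> failure rate can reach "
          f"{breakeven_failure_rate:.1%} (a {rate_increase:.1%} rise)")
```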
Lest the reader think that these percentage increases in test takers are unrealistic, consider that the total number of students taking AP exams has increased by 95 percent over the last ten years, coinciding with a 103 percent increase in exams taken. In my own district of Douglas County, Georgia, whose stunningly low AP exam scores accompany bizarrely high course grades, the number of exams increased by over 200 percent in a five-year period! Even though the passage rate plummeted from a low 42 percent to an even lower 23 percent, and the mode score dropped from 2 to 1, the district’s E&E rate increased by as much as 62 percent simply because more graduates were taking, and thus passing, at least one exam.
It is worth noting that schools with high levels of exam performance can and do get a high CRI, and thus get a high ranking. Of the top ten ranked high schools in the country, eight had real passage rates higher than 80 percent. And schools with dismal exam performance certainly can be excluded (no schools in Douglas County, Georgia, for instance, got a Silver ranking).
Nevertheless, it’s also possible for schools with very low exam scores to still get the prestige of a Silver ranking by boosting participation and/or the number of exams taken per student. Table 2 demonstrates what it takes for a school to reach the median CRI, which was 20.91 for this year’s rankings.
Table 2. Real AP exam failure rates that will result in U.S. News’s median CRI of 20.91, as a function of participation rate and exams per participant.
| Participation rate (%) | E&E rate* | Real failure rate, one exam per participant (%) | Real failure rate, two exams per participant (%) |
| --- | --- | --- | --- |
| 25 | 19.5 | 21.8 | 60.9 |
| 33 | 16.9 | 48.8 | 74.4 |
| 50 | 11.2 | 77.6 | 88.8 |
| 67 | 5.5 | 91.7 | 95.9 |
| 75 | 2.9 | 96.2 | 98.1 |
*U.S. News calls this the “quality-adjusted participation” rate; CRI = 0.25 x participation rate + 0.75 x E&E rate.
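The figures in Table 2 can be reproduced directly from that published CRI formula. A minimal sketch, assuming the best case in which each passing graduate passes exactly one exam (the scenario that reproduces the table’s failure-rate columns):

```python
# Reconstructing Table 2 from the published CRI formula:
#   CRI = 0.25 * participation_rate + 0.75 * E&E_rate
MEDIAN_CRI = 20.91

for participation in (25, 33, 50, 67, 75):   # percent of graduates taking AP exams
    # E&E rate needed to land exactly on the median CRI.
    required_ee = (MEDIAN_CRI - 0.25 * participation) / 0.75
    for exams_per_participant in (1, 2):
        exams = participation * exams_per_participant   # exams per 100 graduates
        # Best case: each passing graduate passes exactly one exam, so passed
        # exams equal the E&E numerator and every other exam can be a failure.
        real_failure_rate = 1 - required_ee / exams
        print(f"participation {participation}%, {exams_per_participant} exam(s) each: "
              f"E&E {required_ee:.1f}, real failure rate {real_failure_rate:.1%}")
```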
Keep in mind that participation rate constitutes 25 percent of the CRI formula, so this alone has a notable effect on the measure. Schools can reach the median threshold and qualify for Silver even with very high failure rates, depending on how many students take exams and how many exams they take.
The highest CRI among high schools in Douglas County, for example, was 19.1, just short of the median. Applying this analysis, how could this school get the coveted Silver ranking? Besides simply raising its participation rate from 37 to 45 percent, it could raise its E&E rate from 13 to 15.4, which would require only about 20 percent of participants to take one additional exam, assuming a stable passage rate. That is just another way of saying that the exams-per-participant ratio has to rise by roughly 20 percent. Would this mean that its graduates are better prepared for college?
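As a rough check on that arithmetic, here is the same CRI formula applied to the rounded figures quoted above (the school’s unrounded participation and E&E values will shift the answers slightly):

```python
# Rough check of the Douglas County scenario, using the rounded figures
# quoted above (unrounded inputs will shift these results slightly).
MEDIAN_CRI = 20.91
participation, ee_rate = 37.0, 13.0

current_cri = 0.25 * participation + 0.75 * ee_rate
print(f"Current CRI: {current_cri:.2f}")                      # about 19

# Option 1: raise participation alone until the CRI reaches the median.
needed_participation = (MEDIAN_CRI - 0.75 * ee_rate) / 0.25
print(f"Participation needed: {needed_participation:.1f}%")   # about 45%

# Option 2: raise the E&E rate alone until the CRI reaches the median.
needed_ee = (MEDIAN_CRI - 0.25 * participation) / 0.75
print(f"E&E rate needed: {needed_ee:.1f}")                    # about 15.5
```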
The “equity” portion of the Equity and Excellence rate is appropriate because it rewards schools for expanding AP participation, which some claim can boost college preparedness even for students who don’t pass the exams. However, the “excellence” half of the E&E rate is far from truth in advertising, because it doesn’t reward the highest levels of exam achievement and, in some cases, celebrates schools with very high failure rates. Surely there are better ways to use AP exams to rate the college readiness of a high school’s graduates: ways that reward the highest levels of achievement and discourage rising failure rates, while also promoting growth in access.
At best, the E&E rate should be seen as a well-intentioned starting point for measuring the availability of AP courses in schools. But U.S. News & World Report, and the stakeholders who rely on its rankings to judge schools, should explore other metrics of college readiness and AP performance. An excellence metric that complements the E&E rate would provide information that parents, administrators, policymakers, and the public need in order to understand the extent to which graduates are truly college ready.
Jeremy Noonan is a father of four school-age children and a certified science instructor in Georgia (including AP Chemistry), who has taught for ten years in public school, private school, and homeschool cooperatives. He also runs Citizens for Excellence in Public Schools, a local education advocacy group.
The views expressed herein represent the opinions of the author and not necessarily the Thomas B. Fordham Institute.