High-achieving, well-behaved students learning to code, reciting Shakespeare, engaging in debates about the validity of climate science or the merits of Columbus Day, and taking advanced courses in a welcoming atmosphere—if this is what you see when you’re walking the hallways, it makes sense to call this a good school. Many experts, however, see schools differently. To them, the impact of the teachers and curriculum on the school’s students is the most important thing. In line with this vision, experts and policy wonks tend to lobby for greater focus on student growth measures when holding schools accountable, while families care most about the overall proficiency of the student body. Who is right?
The debate between “growth” and “proficiency” generates a lot of conversation in the education policy world, but what appear to be irreconcilable differences can be resolved if we acknowledge that each metric maps to a valid view of school quality, and that both types of metrics can serve worthwhile, if distinct, functions.
The wonk’s perspective
We wonks—the policy nerds, bureaucrats, and legislators who argue about and, ultimately, design the school ratings formulas that determine whether the school down the block deserves an “A” or an “F”—believe that a good school is measured not by how good its students already are when they enroll in the fall, but by how much the school itself positively impacts its students in a given year, whatever their incoming levels.
Unlike overall measures of student proficiency, which are largely determined by what types of students enroll in a given school, growth measures—sometimes called “value-added” measures—aim to isolate the impact of the school itself, which is why wonks prefer them. Growth measures take students’ previous performance into account. That makes them fairer: ratings shouldn’t reward schools just because their students were already doing well when the year began, just as they shouldn’t punish schools for working with students who face extra challenges. Growth measures capture the effect the school is having on whatever students it enrolls, prodding schools to do their best with high-achieving students, low-achieving students, and all the students in between.
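To make the intuition concrete, here is a toy sketch of how a growth (or value-added) estimate can take prior performance into account. The scores, the two hypothetical schools, and the simple linear model are all invented for illustration; real state growth models are considerably more elaborate.

```python
import numpy as np

# Toy value-added regression: illustrative only, not any state's actual formula.
# Each entry is a student: prior-year score, current score, and school (0 = A, 1 = B).
prior = np.array([310., 355., 400., 290., 345., 395.])
current = np.array([330., 372., 415., 318., 370., 421.])
school = np.array([0, 0, 0, 1, 1, 1])

# Regress current scores on prior scores plus a dummy for school B.
# The dummy's coefficient is a crude estimate of school B's "value added"
# relative to school A, after accounting for where each student started.
X = np.column_stack([np.ones_like(prior), prior, school])
coef, *_ = np.linalg.lstsq(X, current, rcond=None)
intercept, prior_weight, school_b_effect = coef
print(f"Estimated effect of attending school B rather than A: {school_b_effect:+.1f} points")
```

Because the regression controls for where each student began the year, a school serving low-scoring students can still earn a large positive estimate, which is exactly the fairness property described above.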
After all, schools are not the only factor influencing student academic achievement. A large achievement gap between richer and poorer students is observable before students even begin formal schooling at age four or five. Once students start school, the average student spends less than a quarter of her waking hours at school or on schoolwork in a given year, limiting the school’s direct impact. And students transfer, so the students in a school this year are not all the same as the students from last year. Wonks want to rate schools based on the impact of the school itself, and they recognize that every fall, students in the same grade begin their studies at very different levels.
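That “quarter of waking hours” figure is easy to sanity-check with back-of-envelope arithmetic. The inputs below (a 180-day school year, 6.5 hours of school plus homework per school day, 16 waking hours per day) are rough assumptions, not official statistics.

```python
# Back-of-envelope check of the "less than a quarter of waking hours" claim.
# All inputs are rough assumptions for illustration, not official statistics.
school_days = 180            # assumed length of a school year
hours_per_school_day = 6.5   # assumed school time plus homework
waking_hours_per_day = 16    # assumed waking hours per day
days_per_year = 365

school_hours = school_days * hours_per_school_day      # 1,170 hours
waking_hours = days_per_year * waking_hours_per_day    # 5,840 hours
print(f"Share of waking hours spent on school: {school_hours / waking_hours:.0%}")  # ~20%
```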
Growth for accountability, proficiency for information
There’s a problem, though: If growth is the fairest way to rate schools, why does proficiency—along with other measures based on the “level” of the students—persist as part of accountability systems? Perhaps it just doesn’t feel right to focus only on growth. If the magnet school down the block—with its app-inventing, Proust-quoting honors students—gets an “F” every year on the state report card, many assume the accountability system must simply be wrong.
In fact, skepticism of a single-minded focus on growth isn’t crazy, especially from a parent’s perspective. If parents have to choose between a high-growth school with low proficiency and a low-growth school with high proficiency, there are good reasons they might choose the latter. They might very rationally choose the school where students are high achievers but make underwhelming progress because of something wonks tend to ignore: peer effects.
Recall that students spend less than a quarter of their waking hours at school or on schoolwork. With whom do they spend much of the remainder of their time? That’s right, with their friends, whose opinions they care about more than any adult’s. As Judith Rich Harris describes in her book The Nurture Assumption, one of the strongest levers parents can use to influence their children is to choose a neighborhood, place of worship, or—most obviously—a school where their youngster is likely to fall in with a good crowd. Children will tend to acclimatize to whatever group they fall in with, and the likelihood of falling in with a college-bound group is better at a school full of safe, relatively well-adjusted students, regardless of whether they are making spectacular academic growth every year. Wonks don’t have much to say about all of this because peer effects are hard to account for via policy, but parents understand these dynamics well.
Parents understandably consider the overall level of student achievement at a school, yet student growth is the only fair way to hold schools accountable, so the policy solution is simple: Both growth and proficiency (as well as other school stats) should be publicized for all to see, but accountability measures should be based greatly, if not exclusively, on the school’s impact on its students, i.e., on growth.
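To make that weighting concrete, the sketch below blends the two measures into a single rating. The 80/20 split and the 0-to-100 scales are assumptions chosen purely to illustrate “greatly, if not exclusively”; they are not a recommendation of particular numbers.

```python
# Hypothetical composite rating weighted heavily toward growth.
# The 0.8/0.2 weights and 0-100 scales are invented for illustration.
def composite_rating(growth_score: float, proficiency_score: float,
                     growth_weight: float = 0.8) -> float:
    """Blend growth and proficiency (each on a 0-100 scale) into one rating."""
    return growth_weight * growth_score + (1 - growth_weight) * proficiency_score

# Under this weighting, a high-growth, low-proficiency school outranks the reverse.
print(composite_rating(growth_score=90, proficiency_score=40))  # 80.0
print(composite_rating(growth_score=40, proficiency_score=90))  # 50.0
```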
When deciding which neighborhood to move to or where to enroll their children, parents deserve a complete picture of the school’s performance in a comprehensible format, including overall achievement, how much growth the students are making, and other information. But when the state is praising and rewarding schools—or deciding which ones to shut down—the best metrics to use are those that focus on student growth.