The central problem with making growth the polestar of accountability systems, as Mike and Aaron argue, is that this approach is convincing only if one is rating schools from the perspective of a charter authorizer or local superintendent who wants to know whether a given school is boosting its pupils' achievement, worsening it, or holding it in some kind of steady state. To parents choosing among schools, to families deciding where to live, to taxpayers attempting to gauge the ROI on schools they're supporting, and to policymakers concerned with big-picture questions, such as how their education system compares with those in another city, state, or country, that information is only marginally helpful, and potentially quite misleading.
Worse still, it can deceive the kids who attend a given school, and their parents, by immersing them in a Lake Wobegon of complacency and false reality.
It’s certainly true, as Mike and Aaron say, that achievement tends to correlate with family wealth and with prior academic achievement. It’s therefore also true that judging a school’s effectiveness entirely on the basis of its students’ achievement as measured by test scores is unfair because, yes, a given school full of poor kids might be moving them ahead more than another school with higher scores and a population of rich kids. Indeed, the latter might be adding little or no value. (Recall the old jest about Harvard: its curriculum is fine and its faculty is strong, but what really explains its reputation is its admissions office.)
It’s further true that judging a school simply on the basis of how many of its pupils clear a fixed “proficiency” bar, or of whether its “performance index” (in Ohio terms) rises above a certain level, not only fails to signal whether that school is adding value to its students but also neglects whatever is or isn’t being learned by (or taught to) the high achievers who had already cleared that bar when they arrived in school.
Yes, yes, and yes. We can travel this far down the path with Mike and Aaron. But no farther.
Try this thought experiment. You’re evaluating swim coaches. One of them starts with kids, most of whom already know how to swim, and after a few lessons they’re all making it to the end of the pool. The other coach starts with aquatic newbies, and after a few lessons some are getting across but most are foundering mid-pool and a few have drowned. Which is the better coach? What grade would you give the second one?
Now try this one. You’re evaluating two business schools. One enrolls upper-middle-class students who emerge (with or without having learned much) and join successful firms or start successful new enterprises of their own. The other enrolls disadvantaged students and works very hard to educate them, but after graduating, most of them fail to get decent jobs and many of their start-up ventures end in bankruptcy. Which is the better business school? What grade would you give the second one?
The point, obviously, is that a school’s (or teacher’s or coach’s) results matter in the real world, even more than the gains its students made while enrolled there. A swim coach whose pupils drown is not a good coach. A business school whose graduates can’t get good jobs or start successful enterprises is not a business school that deserves much praise. Nor, if you were selecting a swim coach or business school for yourself or your loved one, would you—should you—opt for one whose former charges can’t make it in the real world.
Public education exists in the real world, too, and EdTrust is right that we ought not to signal satisfaction with schools whose graduates aren’t ready to succeed in what follows, even after those schools have done what they can.
Mike and Aaron are trying so hard to find a way to heap praise on schools that “add value” to their pupils that they risk losing sight of the real world in which those pupils will one day attempt to survive, even to thrive.
Sure, schools whose students show “growth” while enrolled there deserve one kind of praise, and schools that cannot demonstrate growth don’t deserve that kind of praise. But we mustn’t signal to students, parents, educators, taxpayers, or policymakers that we are in any way content with schools that show growth if their students aren’t also ready for what follows.
Yes, school ratings should incorporate both proficiency and growth, but should they, as Mike and Aaron urge, give far heavier weight to growth? A better course for states is to defy the federal Education Department’s push for a single rating for schools and give every school at least two grades, one for proficiency and one for growth. The former should, in fact, incorporate both proficiency and advanced achievement, and the latter should take pains to calculate growth by all students, not just those “growing toward proficiency.” Neither is a simple calculation, growth being far trickier, but better to have both than to amalgamate them into a single, less revealing grade or adjective. Don’t you know quite a bit more about a school when you learn that it deserves an A for proficiency and a C for growth (or vice versa) than when you simply learn that it got a B? On reflection, how impressed are you by a school, especially a high school, that looks good on growth metrics but leaves its graduates (and, worse, its dropouts) ill-prepared for what comes next? (Mike and Aaron agree with us that giving a school two or more grades is more revealing than a single consolidated rating.)
We will not here get into the many technical problems with measures of achievement growth—they can be significant—and we surely don’t suggest that school ratings and evaluations should be based entirely on test scores, no matter how those are sliced and diced. People need to know tons of other things about schools before legitimately judging or comparing them. Our immediate point is simply that Mike and Aaron are half-right. It’s the half that would let kids drown in Lake Wobegon that we protest.