In a previous post, I referred to New York’s fierce political battle over teacher evaluations. Since then, New York lawmakers have passed the education portion of the budget—and moved Governor Cuomo’s controversial teacher evaluation proposal forward. State teachers’ unions responded by calling on parents to opt their children out of standardized tests, hoping that a lack of data would sabotage the system. In response, the Brookings Institution’s Matthew Chingos has published an analysis of whether opting out will actually affect teacher evaluations. The short answer is “no,” and here’s why:
To conduct his analysis, Chingos examined statewide data from North Carolina—specifically, the math achievement of fourth and fifth graders during the 2009–10 school year. Chingos ran two simulations of the data: one that investigated a random group of students opting out of state exams, and another that investigated a group of the highest-performing students opting out. Both simulations found that the effect of opt-outs on a teacher’s evaluation score is small unless a large number of her students choose to opt out.
So what happens if a large number of students in New York opt out?[1] As the number of students opting out increases, so too does the volatility of a teacher’s score. When scores are calculated from a smaller number of students, the value-added system becomes less reliable and therefore less fair. When a majority of students opt out—whether they are a random group or a cluster of the highest performers—the likelihood of a teacher receiving an incorrect rating increases. In other words, opt-outs make teachers more likely to land at the extremes of the rating distribution, either the highest rating (highly effective) or the lowest (ineffective), than they would be if no student opted out.
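The statistical intuition behind that volatility can be sketched with a toy simulation. This is a hypothetical illustration, not Chingos’s actual model: it assumes a teacher’s value-added estimate is simply the average of her tested students’ score residuals, that an exactly average teacher has a true effect of zero, and that ratings use an arbitrary cutoff of 0.5 student-level standard deviations. The class size, trial count, and cutoff are all invented for the sketch.

```python
import random
import statistics

# Toy model (assumptions, not Chingos's method): a teacher's value-added
# estimate is the mean of her tested students' score residuals. Fewer
# tested students means a noisier mean, so an exactly average teacher is
# more likely to be rated at an extreme purely by chance.

random.seed(42)

CLASS_SIZE = 25
TRIALS = 10_000
CUTOFF = 0.5  # hypothetical rating cutoff, in student-level SD units

def share_rated_extreme(students_tested):
    """Fraction of simulated trials where an average teacher's
    estimate falls outside [-CUTOFF, CUTOFF]."""
    extreme = 0
    for _ in range(TRIALS):
        residuals = [random.gauss(0, 1) for _ in range(students_tested)]
        if abs(statistics.mean(residuals)) > CUTOFF:
            extreme += 1
    return extreme / TRIALS

for opted_out in (0, 10, 20):
    tested = CLASS_SIZE - opted_out
    print(f"{opted_out} opt-outs ({tested} tested): "
          f"{share_rated_extreme(tested):.1%} rated at an extreme")
```

Running this shows the share of extreme ratings climbing sharply as the number of tested students shrinks. The sketch mirrors the random opt-out scenario; when the highest performers opt out, as in Chingos’s second simulation, a systematic bias is layered on top of this noise.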
Of course, a higher likelihood of receiving the highest rating doesn’t sound so bad, particularly since New York requires good evaluation scores in order to be eligible for tenure. But poor evaluation scores can lead to tenure termination. Thus, an increase in opt-outs also means an increase in the chance that teachers could be labeled as ineffective—even if they’re not. This is markedly unfair, and Chingos points out that it’s also ironic: by convincing more students to opt out, teachers not only potentially increase the number of their colleagues labeled ineffective, they also make a system they already decry as unfair even more imbalanced. If teachers are fired or their tenure is terminated based on an evaluation system they can prove is unjust and volatile, they could file a lawsuit—and probably win.
None of this matters, of course, if only a small group of students opts out. Value-added scores don’t change significantly until a large number of students opt out. Some have argued that opting out is “more noise than signal,” an effort that has been well publicized but remains successful only in small pockets (and in those places, it doesn’t do kids any favors). Others point out that the number of students opting out is increasing. Time will tell how much of an impact opt-outs will have, but for now—at least in Chingos’s view—it doesn’t look like they’ll affect teacher evaluations.
[1] There is a technical difference between Chingos’s analysis and how Ohio does value-added calculations. While Chingos states that he didn’t account for the margin of error in his value-added estimates (see note 3), Ohio’s ratings do account for the error. As a result, Ohio may very well see the reverse effect of opt-outs—more teachers rated in the middle rating categories—due to an inflated margin of error (the “noise” in the value-added estimates).