Amid all of the hullabaloo over teacher evaluations, fewer states are now using test scores to assess the quality of their teacher workforce. Thankfully, intrepid researchers Tom Dee, Jessalynn James, and James Wyckoff ask a key question before tossing out the teacher-measurement baby with the bathwater: How has the District of Columbia’s evaluation system, IMPACT, evolved in recent years, and has this evolution continued to strengthen the teacher workforce there?
Dee and colleagues have been tracking the impact of IMPACT since not long after it began in 2009. This time they ask whether the key changes in “IMPACT 3.0” have been beneficial. Those changes include reducing the share of the final rating attributed to individual value added from 50 to 35 percent; eliminating school-level value added; and allowing teachers in tested grades, just like those in non-tested grades, to choose a “teacher-selected assessment” to comprise part of their rating, rather than relying solely on PARCC scores (D.C.’s “state” test). IMPACT 3.0 also introduced new performance-based career ladders that helped determine base pay increases, and it created incentives to teach in the forty most demanding schools in the district.
Key to this study, IMPACT 3.0 also introduced higher performance standards for lower-performing teachers. Specifically, it added a new performance category called “Developing” to the existing ratings by dividing the Effective category in half, with the lower portion becoming the new category. (Evidence showed the prior Effective range reflected considerable variability.) As before, teachers with a single Ineffective rating or two consecutive Minimally Effective ratings were to be let go. But under IMPACT 3.0, teachers with three consecutive Developing ratings would also be let go. So analysts created two data sets—one for teachers at the Minimally Effective/Developing threshold and another at the Developing/Effective threshold—and used an intent-to-treat regression discontinuity design to compare outcomes for teachers just below and just above those thresholds. The full study sample included over 17,000 teacher-by-year observations of teachers who received IMPACT ratings between 2010–11 and 2014–15.
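For readers unfamiliar with the method, the logic of a regression discontinuity design can be illustrated with a toy simulation. Everything below is hypothetical—the cutoff value, sample size, and retention trend are invented for illustration and are not the study’s actual data or specification; only the built-in 11-percentage-point drop echoes the paper’s headline estimate.

```python
import random

random.seed(0)

CUTOFF = 250        # hypothetical rating-score cutoff between two categories
BANDWIDTH = 40      # hypothetical window of scores around the cutoff

# Simulate teachers: a rating score and whether they return next year.
# Teachers just below the cutoff face a dismissal threat, so we build in
# an 11-percentage-point drop in retention below the cutoff, on top of a
# smooth trend in retention by score.
teachers = []
for _ in range(20000):
    score = random.uniform(CUTOFF - BANDWIDTH, CUTOFF + BANDWIDTH)
    p_return = 0.70 + 0.002 * (score - CUTOFF)   # smooth trend in retention
    if score < CUTOFF:
        p_return -= 0.11                         # threat effect below cutoff
    teachers.append((score, 1 if random.random() < p_return else 0))

def linear_fit(points):
    """Ordinary least squares for y = a + b*x on (x, y) pairs."""
    n = len(points)
    mx = sum(x for x, _ in points) / n
    my = sum(y for _, y in points) / n
    b = (sum((x - mx) * (y - my) for x, y in points)
         / sum((x - mx) ** 2 for x, _ in points))
    return my - b * mx, b

# Fit retention separately on each side of the cutoff (scores centered at
# zero), then take the difference in the two intercepts: the estimated
# jump in the probability of returning, i.e., the discontinuity.
below = [(s - CUTOFF, r) for s, r in teachers if s < CUTOFF]
above = [(s - CUTOFF, r) for s, r in teachers if s >= CUTOFF]
a_below, _ = linear_fit(below)
a_above, _ = linear_fit(above)
rd_estimate = a_above - a_below
print(round(rd_estimate, 3))   # should land near the built-in 0.11 jump
```

Because teachers on either side of the cutoff are otherwise similar, the jump in retention right at the threshold can be attributed to the rating itself rather than to underlying differences in teacher quality.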
On the descriptive front, they find that under IMPACT 3.0, nearly 20 percent of all DCPS teachers leave each year and 44 percent leave over three years; but attrition among Effective (15 percent each year) and Highly Effective (10 percent each year) teachers is much lower. Among Developing, Minimally Effective, and Ineffective educators, one-year attrition is 26, 53, and 91 percent, respectively.
The key empirical finding is that teachers just below the Minimally Effective threshold are approximately 11 percentage points less likely to return the following year—an increase in attrition of approximately 40 percent—which suggests that IMPACT 3.0 is effective in inducing low-performing teachers to voluntarily exit. Teachers just below the Developing threshold, who have two more years to earn an Effective rating or higher, are 5 percentage points less likely to return the following year.
Recall that Developing teachers include the band of teachers who would previously have been rated Effective, so the next question becomes: Did they really develop? Indeed they did. Analysts found that, among those who remained in DCPS, more than two-thirds (68 percent) of Developing teachers improved to Effective or Highly Effective two years later.
Teachers receive not only multiple observations but also formal feedback and coaching following each one, as DCPS takes its responsibility to develop teachers, particularly low-performing ones, quite seriously.
Still, we know that the nation’s capital is unique in the various assets that it enjoys to make IMPACT work, such as sky-high spending and a deep pool of local talent. But it is encouraging that leadership continues to tweak the system in response to both teacher feedback and the results and challenges that IMPACT has experienced in the last decade. These continuous, thoughtful changes to the system have thus far resulted in sustained improvements in teacher effectiveness in the city. And that’s a very good thing for the kids who live there.
SOURCE: Thomas Dee, Jessalynn James, and Jim Wyckoff, “Is Effective Teacher Evaluation Sustainable? Evidence from DCPS,” Education Finance and Policy (November 26, 2019).