The biography of teacher evaluation’s time in federal policy might be titled Portentous, Polarizing, and Passing. It had gigantic ripple effects in the states—man, did it cause fights—and, with its all-but-certain termination via ESEA reauthorization, it stayed with us ever so briefly.
Some advocates are demoralized, worried that progress will at best stall and at worst be rolled back. Though I’m a little down that we’re unlikely to see many more states reform educator evaluation systems in the years ahead, I think the feds’ exit makes sense.
This has nothing to do with my general antipathy for this administration or my belief that its Department of Education deserves to have its meddling hands rapped. And while I think Tenth Amendment challenges are justified, I have a different primary motivation.
In short, I think the work of teaching is so extraordinarily complex and teachers are so tightly woven into the fabric of school communities that any attempt by faraway federal officials to tinker with evaluation systems is a fool’s errand. I think we may eventually come to view the Race-to-the-Top and ESEA-flexibility requirements related to assessing teachers as the apotheosis of federal K–12 technocracy.
If you’ve never dug into the details of evaluation-reform implementation, you’re probably thinking I’m exaggerating. Just bear with me for the next five hundred words. I think you’ll quickly appreciate just how daunting this work is and, as a consequence, how poorly federal diktats fit the bill.
I had a hand in New Jersey’s early-stage implementation, so I follow its developments pretty closely. Now, I’m by no means objective, and I’m not arguing that it perfectly reflects the entire field. But its work is instructive. Here’s just a snapshot of what it’s going through.
First, check out the website. You’ll find information on the state’s growth model (“student growth percentiles,” or SGPs), student learning objectives (SLOs), how the new evaluation system influences tenure, how summary ratings inform professional development, and more. Also take a look at the timeline showing five years of work.
Then learn about some of the complications that need to be worked out when trying to measure academic growth. For example, how much of a school year must an educator be teaching a grade 4–8 reading or math class for an SGP to be associated with his work? How many students must he have taught? What percentage of the school year must each student spend in a teacher’s class before he becomes the “teacher of record”? How do you calculate a growth score for an educator teaching both reading and math classes?
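To give you a feel for the micro-decisions buried in those questions, here’s a toy sketch in Python. Every number and rule in it (the teacher-of-record thresholds, the minimum student count, the enrollment-weighted combination of a reading and a math course) is invented for illustration; it is emphatically not New Jersey’s actual rule set.

```python
# Hypothetical illustration only: the thresholds and the weighting rule
# below are assumptions, not New Jersey's actual business rules.

MIN_TEACHER_SHARE = 0.60   # assumed: teacher must teach the course 60% of the year
MIN_STUDENT_SHARE = 0.70   # assumed: student must be enrolled 70% of the year
MIN_STUDENTS = 20          # assumed: minimum attributable students for a score

def attributable_students(roster, teacher_share):
    """Return the students who count toward this teacher's SGP."""
    if teacher_share < MIN_TEACHER_SHARE:
        return []
    return [s for s in roster if s["enrollment_share"] >= MIN_STUDENT_SHARE]

def course_median_sgp(students):
    """Median SGP (1-99) across a course's attributable students."""
    scores = sorted(s["sgp"] for s in students)
    n = len(scores)
    mid = n // 2
    return scores[mid] if n % 2 else (scores[mid - 1] + scores[mid]) / 2

def teacher_sgp(courses):
    """Combine course medians, weighted by attributable-student counts,
    for a teacher who teaches (say) both reading and math."""
    weighted, total = 0.0, 0
    for course in courses:
        students = attributable_students(course["roster"], course["teacher_share"])
        if students:
            weighted += course_median_sgp(students) * len(students)
            total += len(students)
    if total < MIN_STUDENTS:
        return None  # too few students for a defensible score
    return weighted / total
```

Every constant in that snippet is a policy choice that someone has to make, defend, and, eventually, litigate.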
Then consider how an SGP score (reported on a 1–99 scale) is converted to a 1–4 scale, since the state uses a four-level summative teacher-rating system. To figure out just this small component of the new evaluation system, the state, among other things, analyzed data from its multi-year pilot program, studied academic research, consulted with other states, and conferred with its own “technical advisory committee” and other outside experts.
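For concreteness, here’s what such a conversion might look like in code. The cut points below are made up; the state’s real conversion chart emerged only after the pilot analysis and expert consultation just described.

```python
# Hypothetical conversion of a median SGP (1-99) to the state's 1-4
# rating scale. These cut points are invented for illustration; New
# Jersey's actual conversion chart differs.

def msgp_to_rating(msgp: float) -> float:
    bands = [
        (85, 4.0),  # assumed band: mSGP of 85-99 maps to 4.0
        (70, 3.5),
        (55, 3.0),
        (40, 2.5),
        (25, 2.0),
        (10, 1.5),
    ]
    for floor, rating in bands:
        if msgp >= floor:
            return rating
    return 1.0

print(msgp_to_rating(62))  # -> 3.0 under these invented cut points
```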
Then there’s the whole question of what percentage of the final summative rating should be tied to SGPs: 50 percent? 30 percent? 10 percent? This remains in flux, partly because of the state’s transition to new assessments.
Given that growth scores have been on the receiving end of lots of criticism (fair and otherwise), the state has produced a fact sheet to “Alleviate Miscommunication Surrounding mSGPs.” If you want more information, you can read the official state memo covering everything from the exemption of SGP scores from the state’s Open Public Records Act and the process for administrators to access data through a centralized information-management system, to corrective action plans for low-rated educators and legal requirements for tenure cases, to the score certification process and accessing official course roster data. (The appendix has user guides, a score-conversion chart, a roster verification guide, a methodology video, and more.)
And we haven’t even talked about the other components of the evaluation. As the state notes of its system, “A central tenet of AchieveNJ is that educators are never evaluated on a single factor or test score alone, but on multiple measures of both effective practice and student learning.” There’s a whole host of issues associated with observations (e.g., the number of times a tenured or untenured teacher must be observed per year, who can do the observing, which rubrics are allowed).
Once you fully understand SGPs, SLOs, and observation scores, you’re ready for the summative rating calculator, which combines these components. Any questions? This thirteen-minute video might help explain the system.
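If you want a feel for what that calculator is doing under the hood, here’s a stripped-down sketch. The component weights, the split between observation and SLO scores, and the rating cut points are all assumptions for illustration, not the state’s published formula; recall that the SGP weight itself is still in flux.

```python
# Hypothetical summative calculator. The components match the ones
# discussed above (observation practice, SLOs, mSGP), but the weights
# and the four rating bands are assumptions, not AchieveNJ's published values.

def summative_rating(observation, slo, msgp_rating, sgp_weight=0.30):
    """All inputs on the 1-4 scale; sgp_weight is the share tied to growth."""
    remaining = 1.0 - sgp_weight
    # Assumed: observation practice gets 75% of the non-SGP share, SLOs 25%.
    score = (observation * remaining * 0.75
             + slo * remaining * 0.25
             + msgp_rating * sgp_weight)
    return round(score, 2)

def label(score):
    # Assumed cut points for the four summary levels.
    if score >= 3.50:
        return "Highly Effective"
    if score >= 2.65:
        return "Effective"
    if score >= 1.85:
        return "Partially Effective"
    return "Ineffective"

# The weighting question from above is not academic: the same teacher can
# land in a different category depending on how much SGPs count.
for w in (0.10, 0.30, 0.50):
    s = summative_rating(observation=3.6, slo=3.0, msgp_rating=1.5, sgp_weight=w)
    print(f"SGP weight {w:.0%}: score {s} -> {label(s)}")
```

Run that loop and the same teacher drifts from “Effective” toward “Partially Effective” as the SGP weight climbs, which is exactly why the percentage fight matters so much.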
I’ve used up my five hundred words and your patience. But I hope you see my point.
Does this seem like something Uncle Sam should be monkeying around with?