Having worked on educator evaluation reform at a state department of education, I do my best to keep up with developments related to the extremely tough work of state-level implementation. I follow New Jersey’s progress especially closely because I took part in the work there (and I’m certainly biased in its favor).
If you also track such stuff, take a look at the “2013-14 Preliminary Implementation Report on Teacher Evaluation” recently released by the NJDOE.
There’s much to like here, including the way the state reports on the history of the program and its focus on district engagement and continuous improvement.
But two things really caught my eye. First, the report has some important data points. For instance:
- The pilot program included 30 districts and nearly 300 administrators.
- More than 25,000 educators took part in some kind of state training in 2013–14.
- The new program may have increased the number of teacher observations around the state by 180,000(!).
- More than half of districts are using some version of the Danielson observation instrument, and most of the remaining districts are using one of four other tools.
Second, the state is betting on “student growth objectives” (SGOs) and putting significant energy into implementing them well.
The state held forty-four SGO workshops from late 2013 through early 2014 and another thirty-nine “SGO 2.0” sessions this spring, adding more this summer and fall in response to demand. According to a state survey, teachers report that SGOs can be beneficial to their practice.
Things aren’t perfect by any means. According to the state’s review, only 70 percent of SGOs were deemed to be “high-quality,” “specific,” and “measurable.” Most of the other 30 percent were found to lack specificity. There was also inconsistency in how teachers connected SGOs to specific standards.
But there were also some fascinating and encouraging findings. Nearly all SGOs included measurable baseline data (yes, as the “G” would imply), and 89 percent used a pre-test to assess initial skills and/or knowledge. Four percent of SGOs actually used four or more baseline data points.
One paragraph (from page 11) was so interesting that I’ll reproduce it verbatim:
> Educators commonly used pre-existing assessments for SGO purposes rather than adding new assessments. 63% of educators in the sample used district or department-created common assessments or commercial assessments like the DRA2 or MAP to measure progress, thereby increasing the comparability of SGOs between teachers. A large majority (88%) of surveyed districts reported that at least half of their teachers used common assessments. Over half (54%) of districts in the survey reported that the summative SGO assessments were embedded into the typical testing schedule and did not increase the overall number of assessments given to students.
This is just a smidgen of the good stuff in the report. I encourage you to take a gander and ask your favorite states working on evaluation reform if they have something similar.
Putting my cheerleading aside for a moment, I need to add important caveats. Some educators in this state (and others) really don’t like what’s happening. There’s frustration in some corners that the state is involved in these issues at all. I’m confident that this agency (like every agency I’ve ever come across) left some unpleasant details about its implementation of this work out of its report. The report also doesn’t include data from empirical measures of student growth (SGPs, in this case), though the state says that data is coming. And most importantly, we don’t yet know how any of this is influencing student learning.
With all that said, I’m glad NJDOE produced this status check, and I’m hopeful about its contents. New Jersey’s teachers and administrators deserve a whole lot of credit, as do Pete Shulman and his team at the state’s department of education.
Though this document is about implementation, it’s important to note that there’d be no state program to implement were it not for the legislation championed by Governor Christie, former Commissioner Chris Cerf, and legislative leaders, and supported, not unimportantly, by the NJEA.
And while some of my friends on the right won’t like this, and I may break out in hives myself for mentioning it, I’d be disingenuous if I didn’t acknowledge that educator evaluation reform wouldn’t be where it is today were it not for Race to the Top.