Steven Glazerman, Sarah Dolfin, Martha Bleeker, Amy Johnson, Eric Isenberg, Julieta Lugo-Gil, Mary Grider, and Edward Britton
Mathematica Policy Research
October 2008
This report provides the first-year findings of a 5-year "gold standard" study, sponsored by the Institute of Education Sciences, that compares "comprehensive," high-intensity teacher induction programs to the (presumably) less comprehensive "business as usual" variety. Since virtually every school district in America offers some kind of new-teacher induction program, with some costing up to $6,600 per head, this study is relevant and timely. Seventeen school districts (serving primarily low-income students) in 13 states were randomly assigned either to a treatment group (which participated in a comprehensive teacher induction program provided through Mathematica*) or a control group (which participated in the district's standard induction program, though little explanation was given of what "business as usual" actually looked like in each district).

Not surprisingly, first-year findings show that teachers participating in the "fully loaded" model received significantly more mentoring, guidance on instruction, and time in certain professional development activities (e.g., observing other teachers) than did control-group teachers. What was surprising, however, was the neutral or even negative effect of such stepped-up induction on student performance. In fact, there were no across-the-board positive impacts on student test scores in grades 2-6, and the more intensive interventions tended to lower math scores in grades 2 and 3. Nor were there statistically significant differences between treatment and control teachers' instructional practices (based on single classroom observations) or in their mobility rates (i.e., the percentage of teachers returning to the district for a second year after the 1-year intervention). These initial findings don't bode well for comprehensive programs, but it's important to note how exceedingly difficult it is to detect impacts in rigorous evaluations like this one, particularly after only one year.
What's even more regrettable, though, is that only a subsample of districts remains "able and willing" (whatever that means) to participate in a second year of the intervention. While it's well known that gold-standard evaluations carry a hefty price tag, funding a 5-year, $17.6 million evaluation of what will amount to a 1-year intervention for most of the original sample is excessive by any measure. As others have said before, well-funded federal evaluations can still fall short. You can read the report here.
*The comprehensive induction programs were not provided by Mathematica itself (that would be a significant conflict of interest). Rather, Mathematica convened a panel of experts to review proposals from comprehensive induction providers. Induction programs developed by the Educational Testing Service and the New Teacher Center at UC-Santa Cruz were ultimately selected to participate in the study.