Educators have rightly complained for ages that the professional development (PD) they typically receive in school districts is next to useless. And countless studies have shown that most forms of PD fail to help teachers improve.
A new meta-analysis by Brown University’s Matthew Kraft and colleagues tries to separate the empirical wheat from the chaff by examining only the causal evidence for one PD model that is very popular in schools: teacher coaching. They define coaching programs broadly as in-service PD in which coaches or peers observe teachers’ instruction and provide feedback to help them improve. More specifically, coaching is an instructional expert working with teachers to discuss classroom practice in a way that is (a) individualized, with one-on-one coaching sessions; (b) intensive, with coaches and teachers interacting at least every two weeks; (c) sustained, with teachers receiving coaching over an extended period of time; (d) context specific, with teachers coached on their practices within their own classrooms; and (e) focused, with coaches and teachers engaging in deliberate practice of specific skills. The authors exclude teacher-preparation and school-based teacher-induction programs.
They identify sixty studies of teacher coaching programs: fifty-five in the United States and five in Chile and Canada. Each had a causal research design (mostly randomized controlled trials) and examined the effects of coaching on instruction, achievement, or both. All of the included studies were published in or before 2017, met the coaching definition summarized above, and took place in settings ranging from early childhood through grade twelve.
Data sources included standardized test scores; classroom observation instruments that captured teachers’ pedagogical practices; and measures of teacher-student interactions, student engagement, and classroom climate. By pooling results across multiple studies, the meta-analysis gains statistical power beyond what any single study provides.
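To see why pooling helps, here is a minimal sketch of the simplest version of the idea: a fixed-effect meta-analysis with inverse-variance weights. The numbers below are hypothetical, not drawn from the paper, and the authors’ actual models are more sophisticated; the point is only that combining studies shrinks the standard error of the pooled estimate.

```python
import math

# Hypothetical (effect size in SD units, standard error) pairs --
# illustrative only, NOT figures from Kraft et al.
studies = [
    (0.55, 0.20),
    (0.40, 0.15),
    (0.62, 0.25),
]

# Inverse-variance weighting: more precise studies count more, and the
# pooled variance is the reciprocal of the summed weights.
weights = [1 / se**2 for _, se in studies]
pooled_effect = sum(w * es for (es, _), w in zip(studies, weights)) / sum(weights)
pooled_se = math.sqrt(1 / sum(weights))

print(f"pooled effect: {pooled_effect:.2f} SD (SE {pooled_se:.2f})")
# No single study here has an SE below 0.15; the pooled estimate's is ~0.11.
```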
In a nutshell, the analysts find pooled effect sizes of 0.49 standard deviations (SD) on instruction and 0.18 SD on achievement. They find no statistically significant effects on student achievement for general coaching programs, but do find them for content-specific programs (0.20 SD), positing that general programs are often less focused on helping teachers improve test scores. Further, coaching appears equally effective at the elementary, middle, and high school levels. Surprisingly, the analysts find no significant differences in effect sizes between coaching programs delivered in person and those delivered virtually, although the virtual data are less reliable. Nor do they find evidence that coaching must be “high dosage” to be effective. Finally, average effects from larger programs (those serving one hundred or more teachers) are only a fraction of those found in smaller ones, suggesting that effects diminish as coaching is taken to scale.
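To put those effect sizes in rough perspective, a back-of-the-envelope conversion (ours, not the authors’) treats an effect in SD units as how far it would move someone who started at the median, assuming normally distributed outcomes:

```python
import math

def percentile_after_gain(effect_sd: float) -> float:
    """Percentile rank, after a gain of `effect_sd` standard deviations,
    of someone who started at the median (normality assumed)."""
    return 100 * 0.5 * (1 + math.erf(effect_sd / math.sqrt(2)))

# Rough translations of the pooled estimates -- our illustration only:
print(f"{percentile_after_gain(0.18):.0f}")  # ~57th percentile (achievement)
print(f"{percentile_after_gain(0.49):.0f}")  # ~69th percentile (instruction)
```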
Thankfully, some teachers have not completely soured on the power of professional development, done right, to help them improve. And we’ve heard for years that PD can be much better than it has traditionally been, as long as it is content-based, collaborative, sustained, supportive, and so on. But those are simple words that are hard to put into effective practice. That’s why the report’s major recommendation, grounded in both rigorous science and common sense, is so helpful: “It may be that coaching is best utilized as a targeted program with a small corps of expert coaches working with willing participants and committed schools rather than as a district-wide PD program.” Yes indeed, this former teacher agrees: it very well may be.
SOURCE: Matthew A. Kraft et al., “The Effect of Teacher Coaching on Instruction and Achievement: A Meta-Analysis of the Causal Evidence,” Review of Educational Research (August 2018).