In most states, only math and reading teachers in grades 4–8 receive evaluations based on value-added test results. For all other teachers, it’s on to Plan B. To evaluate these teachers, many districts are using alternative measures of student growth, which include vendor assessments (commercial, non-state exams) and student learning objectives (SLOs, or teacher-designed goals for learning). But how are these alternative measures being administered? What are their pros and cons? The research on this issue is terribly thin, but a new study from the Institute of Education Sciences casts an intriguing ray of light. Through in-depth interviews, the researchers elicited information on how eight mid-Atlantic districts (unnamed) are implementing alternative measures.
Here are the study’s four key takeaways: First, educators considered vendor assessments (with results analyzed through a form of value-added modeling) a fairer and more rigorous evaluation method than SLOs. Second, both alternative measures yielded greater variation in teacher performance ratings than observations alone. Third, implementing SLOs in a consistent and rigorous manner proved extremely difficult; as the authors write, “All types of stakeholders expressed concern about the potential for some teachers to ‘game the system’ by setting easily attainable goals.” Fourth, implementing these alternative measures demanded considerable time and money. The time costs should be of particular concern to states already grappling with over-testing.
So do the benefits outweigh the drawbacks? The authors don’t render judgment, but they raise enough concerns to make this reader think that alternative measures (especially SLOs) may not be worth all the effort.
SOURCE: Moira McCullough et al., Alternative Student Growth Measures for Teacher Evaluation: Implementation Experiences of Early-Adopting Districts (Washington, D.C.: Institute of Education Sciences, July 2015).