Regarding the April 17th story, "Failing schools usually are," the research is consistent and clear: there is a low statistical correlation between school performance as measured by point-in-time, year-end test scores (as used under NCLB to determine "Adequate Yearly Progress") and school performance as measured by how much students grow in knowledge during a school year. I have summarized the results of this research in two places. One is the publication Unfinished Business: More Measured Approaches in Standards-Based Reform, published by the ETS Policy Information Center; the other is "Failing" or "Succeeding" Schools: How Can We Tell?, published, but not endorsed, by the American Federation of Teachers. Paul Peterson has also written well on this subject.
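To make the status-versus-growth distinction concrete, here is a minimal sketch with entirely hypothetical numbers; the school names and scores are invented for illustration and are not drawn from the research cited above. A school can post high year-end scores while producing little growth, and the reverse.

```python
# Hypothetical illustration: status (year-end score) vs. growth (gain during the year).
# All numbers are invented for illustration only.

schools = {
    # name: (average fall score, average spring score)
    "School A": (240, 244),   # high-scoring, but students gain little
    "School B": (205, 219),   # low-scoring, but students gain a lot
}

for name, (fall, spring) in schools.items():
    status = spring            # what a point-in-time, year-end measure sees
    growth = spring - fall     # what a growth measure sees
    print(f"{name}: year-end score = {status}, growth during the year = {growth}")

# Judged only by year-end scores, School A looks successful and School B looks
# failing; judged by growth, the ranking reverses. This is the kind of
# divergence behind the low correlation between the two measures.
```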
Operational programs measuring "value added," however, have a serious flaw: they measure not what happens from fall to spring, but what happens from the end of one school year to the end of the next. Their results therefore include what happens over the summer, when some students gain knowledge, some lose it, and others stay the same. The research is also clear on this point. The answer, I think, is to give two forms of the same test, one in the fall and one in the spring, so that the gains measured are those that occur during the actual school year. The NWEA work is one large-scale example of this approach.
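As a rough sketch of the summer problem, assuming purely hypothetical scores for a single student, the spring-to-spring gain folds the summer change into what gets attributed to the school, while fall and spring testing isolates the school year itself.

```python
# Hypothetical illustration of the summer problem in spring-to-spring "value added."
# All scores are invented for illustration only.

spring_year1 = 220   # score at the end of the first school year
fall_year2   = 214   # score at the start of the next school year (summer loss)
spring_year2 = 232   # score at the end of the second school year

spring_to_spring_gain = spring_year2 - spring_year1   # 12: school year plus summer
fall_to_spring_gain   = spring_year2 - fall_year2     # 18: the school year alone
summer_change         = fall_year2 - spring_year1     # -6: change over the summer

print(f"Spring-to-spring gain (credited to the school): {spring_to_spring_gain}")
print(f"Fall-to-spring gain (the actual school year):   {fall_to_spring_gain}")
print(f"Change over the summer:                         {summer_change}")

# The spring-to-spring figure understates this school's contribution by the
# amount of the summer loss; for a student who gains over the summer it would
# overstate it. Testing in both fall and spring separates the two effects.
```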
Paul E. Barton
Senior Associate
ETS Policy Information Center