Jay P. Greene, Marcus Winters, Greg Forster, Manhattan Institute for Policy Research
February 11, 2003
In this new Manhattan Institute study, Jay Greene and two associates conclude that high-stakes tests are accurate measures of student proficiency whose reliability is not undermined by "teaching to the test" or other strategies intended to inflate or manipulate scores. Their method is to compare results from high- and low-stakes tests in nine school systems (two states, seven districts), the logic being that nobody has an incentive to fudge low-stakes test scores; therefore, if low- and high-stakes instruments yield similar results, we need not worry overmuch about the accuracy of the high-stakes kind. Their strongest correspondence comes from Florida, where they found "a 0.96 correlation between high and low stakes test score levels, and a 0.71 correlation between the year-to-year gains on high and low stakes tests." Across all nine sites, they found a robust average correlation of 0.88 between achievement levels on the low- and high-stakes tests and a weaker but decent correlation (0.45) between yearly score gains on the two kinds of instruments. These findings contradict recent claims by David Berliner and Audrey Amrein that high-stakes test results are distorted. [See http://www.edexcellence.net/gadfly/issue.cfm?issue=63#924 and http://www.edexcellence.net/gadfly/issue.cfm?issue=8#372 for more on the Berliner and Amrein reports.] You can find the new Manhattan Institute study at http://www.manhattan-institute.org/cr_33.pdf.
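For readers who want to see what the levels-versus-gains comparison amounts to in practice, here is a minimal sketch in Python. The score values, the school-level unit of analysis, and the use of a simple Pearson correlation are assumptions made for illustration only; the study's actual data and methods are described in the report linked above.

```python
import numpy as np

# Hypothetical data: mean scale scores for the same five schools on a
# high-stakes and a low-stakes test in two consecutive years.
# (Values are illustrative, not taken from the study.)
high_2001 = np.array([310.0, 295.5, 322.1, 288.7, 305.3])
high_2002 = np.array([315.2, 298.0, 320.4, 294.1, 309.8])
low_2001  = np.array([612.0, 590.4, 640.2, 581.9, 603.7])
low_2002  = np.array([620.5, 593.1, 637.8, 590.0, 611.2])

def pearson(x, y):
    """Pearson correlation coefficient between two equal-length arrays."""
    return np.corrcoef(x, y)[0, 1]

# Correlation of score *levels*: do schools that score high on one test
# also score high on the other in the same year?
level_corr = pearson(high_2002, low_2002)

# Correlation of year-to-year *gains*: do schools that improve on one test
# also improve on the other?
gain_corr = pearson(high_2002 - high_2001, low_2002 - low_2001)

print(f"level correlation: {level_corr:.2f}")
print(f"gain correlation:  {gain_corr:.2f}")
```

The point of the two numbers is the one the study trades on: levels can agree simply because the same schools serve the same students, so the gains correlation is the stricter check on whether high-stakes results track genuine year-over-year improvement.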