Successful Common Core implementation will hinge on a number of factors, among the largest of which will be getting the assessments right, in terms of both design and cost. Central to that challenge are the controversial multiple-choice “bubble” tests, welcomed by some as a fast and efficient means of gauging student knowledge and skills and derided by others as the cause of “teaching to the test” and superficial knowledge. This recent article in Psychological Science finds merit in the bubble test, provided it is designed well. It reports findings from two small-sample studies (one with thirty-two participants, conducted at UCLA; the other with ninety-six, conducted at Washington U.). The upshot: Both found that properly structured multiple-choice tests (those that offer plausible wrong answers alongside the correct response) “trigger the retrieval processes that foster test-induced learning and deter test-induced forgetting.” In other words, bubble tests with competitive answer choices trigger actual knowledge-retrieval processes rather than simple recognition processes, and they do so better than cued-recall (fill-in-the-blank) tests. The bottom line is cautiously encouraging: Multiple-choice tests, done correctly, can be a useful tool in an assessor’s kit (a point that we have previously argued). The CCSS assessment consortia would be wise to keep that in mind.
SOURCE: Jeri Little, Elizabeth Bjork, Robert Bjork, and Genna Angello, “Multiple-Choice Tests Exonerated, at Least of Some Charges: Fostering Test-Induced Learning and Avoiding Test-Induced Forgetting,” Psychological Science 23, no. 11 (2012): 1337–44.