Critics of “bubble tests” rejoice! The campaign against the use of multiple choice questions in state tests may finally be turning the tide. But, on the eve of this victory, it’s worth pausing to ask: is this actually a good thing for those of us who care about smart, efficient, and effective accountability systems?
Details continue to trickle in about the PARCC and SMARTER Balanced assessment consortia's plans for their summative ELA and math assessments. Catherine Gewertz has dug into the RFPs for both consortia and shared some of her findings in an article published in Education Week yesterday. There's a lot of interesting information, including the fact that both consortia appear to be moving away from multiple choice questions in their test designs. Gewertz explains:
Documents issued by the two groups of states that are designing the tests show that they seek to harness the power of computers in new ways and assess skills that multiple-choice tests cannot…
While the plans offer few details about how the new items will differ or why it's necessary to abandon multiple choice questions entirely, people across the education world will no doubt celebrate the demise of the multiple choice question.
Multiple choice items are, after all, the assessment items everyone loves to hate. Critics on all sides of the education debate deride “bubble tests” as the enemy of genuine learning and believe that our reliance on assessments that use multiple choice questions has forced teachers to “teach to the test” rather than focusing on helping students achieve deep conceptual understanding of critical content and 21st century skills.
But, perhaps we shouldn’t be so quick to relegate multiple choice questions to the dustbin of assessment history? After all, when carefully crafted, these questions can be useful, reliable, and cost-effective ways to gather information about student learning. And because they can be scored quickly, information from multiple choice questions can be used almost immediately to drive whole class and small group instruction and individual tutoring.
Unfortunately, “bubble tests” have become the scapegoat for everything that’s wrong with assessments today. In particular, people tend to criticize two things.
First, some multiple choice questions are just poorly written. Too many questions assess only low-level content that requires little more than rote memorization of basic skills, rather than higher-level application or conceptual understanding.
Second, analysis of the data from multiple choice questions too often begins and ends with whether the student got the question right or wrong. But, such superficial analysis ignores the most useful information that can be gleaned from multiple choice questions. Specifically, careful analysis of the “distractors”—the purposefully chosen wrong answers—can help the teacher pinpoint where student understanding is breaking down.
Take, for example, the following 10th grade math question:
What is the median of the data set below?
30, 37, 19, 42, 33, 37
A. 31
B. 33
C. 35
D. 37
This is a basic question that assesses student mastery of core math skills. But, analysis of the distractors can help teachers identify where student understanding is breaking down. For example, a student selecting answer B has most likely confused mean with median—information that a teacher can use to target individual tutoring or instruction right away. But, more than that, would an open-ended question give teachers more or better information about student mastery of this basic skill? Not necessarily.
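For readers who want to check the arithmetic, here is how the key answer choices fall out of this data set (the worked steps below are added for illustration, not part of the original item):

Ordered data: 19, 30, 33, 37, 37, 42
Median: the average of the two middle values, (33 + 37) / 2 = 35, which is answer C, the correct response
Mean: (19 + 30 + 33 + 37 + 37 + 42) / 6 = 198 / 6 = 33, which is answer B, the mean-for-median confusion
Mode: 37, which is answer D, a plausible catch for students who confuse mode with median

Each wrong answer, in other words, can be built to flag a specific, predictable error.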
Of course, it’s also possible to write questions that assess far more than basic skills. Carefully crafted multiple choice questions can demand application of essential content and skills and can push student thinking. And the data can be equally useful in driving instruction and tutoring.
What’s more, multiple choice questions are generally more efficient than open-ended questions. Scoring them is quick, easy, and cost-effective. And there is very little scoring bias: when properly constructed, each question has clear right and wrong answers. (Open-ended questions, by contrast, can be scored differently by different people, which often leads either to variations in student scores or to an overreliance on simplistic rubrics that do not give the full picture of student understanding of essential content and skills.)
Of course, the effectiveness of multiple choice items, like that of all assessment items, depends on how well they are developed and how effectively they are put to use as part of an overall assessment and instructional strategy. And, while assessments should never rely exclusively on multiple choice questions, avoiding them entirely because they may have been abused in the past seems misguided.
So, as PARCC and SMARTER Balanced look to develop the assessments of the future, perhaps we shouldn’t be so quick to abandon something that, when paired with innovative new question types, might be the most effective and efficient way to gauge student learning of essential content and skills.