In the world of standards-based and data-driven instruction, knowing precisely how the Common Core will be assessed is critical. After all, while standards help explain what students should know and be able to do, it’s the assessments that clarify how student mastery will be measured. And that information is critical to ensuring that what is taught in the classroom matches—in terms of both content and rigor—what is articulated in the standards and measured by the assessments.
Yet both federally funded assessment consortia have offered only glimpses of how they plan to measure student mastery of the Common Core, which makes the information and sample items the consortia do share all the more critical to classroom-level implementation efforts.
Most recently, the Smarter Balanced Assessment Consortium released a small handful of web-based English language arts and math sample test items, which are available for public comment and feedback until November 2. The items are useful for painting a picture of how a few standards will be assessed and how technology will be used, but the quality and rigor of the questions themselves are a mixed bag. Some help demonstrate just how different instruction aligned to a standard needs to be to meet the content and rigor demands of the CCSS; others seem poorly constructed or misaligned to the demands of the new standards.
The Good
To begin, several questions are quite strong and very clearly demonstrate how SBAC will focus reading assessments on one of the most critical elements of the CCSS—pushing students to use evidence drawn directly from the text to support conclusions and analyses. Item 43001, for instance, tells the students that “Naomi is worried and has done something wrong,” and asks them to highlight three sentences from the text that support the conclusion. In the past, related questions might ask students to draw that conclusion—something they might arrive at without fully understanding or analyzing the text—but would rarely push them to defend the conclusion using text-based evidence. This question goes well beyond what we’ve seen in the past and really pushes students to demonstrate their knowledge and understanding of the text. What’s more, the question shows teachers how technology will be leveraged to score questions like this quickly, and in a way that would make it difficult for a student with limited understanding of the text, or of the skill being assessed, to earn full credit.
Similarly, item 43008 is an effective constructed-response item that would be difficult to answer without close reading and deep understanding of the passage itself. (That said, a more specific scoring rubric would add significant value and would make it far clearer what students need to know and be able to do to demonstrate mastery.)
For item 43016, students are asked to read a paragraph, identify sentences that should be removed, and explain why each is superfluous. This is an excellent, higher-level question, and the related scoring rubric is clear and specific, leaving little room for confusion or interpretation.
The Bad
Unfortunately, some questions miss the mark. Item 43600, for instance, is an open-ended question that would work better as a multiple-choice item. The item is a simple “right there” question that asks what the main character learned about her grandmother. The answer is simple, straightforward, and doesn’t require students to draw conclusions or make inferences. Yet students are asked to “use details from the text to support the answer.” It’s hard to see how supplying details would add anything except possibly confusion. A multiple-choice item—or one asking students to highlight the place in the text where they would find the answer—would focus the question more clearly and provide just as much (if not more) useful information.
Item 43007 is perhaps more troubling because it seems poorly written. It asks students what they learned about diamonds from the passage, but the answer relates not to diamonds but to the formation of galaxies. Rather than asking students to dive deeper and analyze the text, this question seems to serve only to confuse.
Finally, in item 43599, students are asked to edit a text. Unfortunately, as part of the editing process, they are asked to retype the entire paragraph, noting the changes as they type. This is as much an exercise in transcription as in editing; it would have been better to simply ask students to add their edits directly to the paragraph itself.
The Ugly
Perhaps the most disappointing aspect of the released items relates to how the CCSS writing standards are assessed. Two of the most significant shifts in the CCSS ELA standards are the focus on writing to texts and the move from narrative to persuasive and analytic writing through the grades. Unfortunately, neither of these shifts is well represented in the sample items.
For starters, the prompts don’t seem much different from those that have dominated state assessments for years. Consequently, they don’t give much information about how different the CCSS expectations are or how student mastery will be measured differently. The first prompt, aimed at fourth graders (43009), is a typical narrative-writing task that asks students only to complete a story. And the related scoring rubric is generic, asking only broadly for “supporting details,” “appropriate word choice,” and so on.
The additional extended-response items (43010 and 43019) seem designed to focus on taking a position and supporting it with evidence from text, but they serve that purpose poorly. The first (43010) asks for little more than the students’ personal opinion on why the school day should be lengthened, and it’s hard to imagine students gleaning much useful evidence or detail from the “school schedule” they’re asked to read and refer to in their answers. Furthermore, the rubric is so general that it is nearly meaningless.
The final extended-response item (43019), aimed at sixth graders, goes a bit further than the fourth-grade example, but not much. It gives students eleven bullet points to read (seven arguing in favor of allowing cell phones in school and four arguing against) and then asks students:
Based on what you read in the text, do you think cell phones should be allowed in schools? Using the lists provided in the text, write a paragraph arguing why your position is more reasonable than the opposing position.
Again, no real “sources” are given, and students are not really asked to cite authentic evidence from text to support their position, a key element of the CCSS writing standards.
Teachers around the country have already begun the daunting task of aligning their work to the content and rigor of the Common Core. If we’re going to hold them and their students accountable for mastery of these standards, it’s time to get much more serious about giving them the assessment guidance they need to inform that work and keep it on track.