Thanks in part to the Common Core, there is broad (though not yet universal) agreement that we need to raise the level of rigor in the reading that’s assigned to all students. Unfortunately, the guidance that’s starting to emerge about how teachers can best select “grade-appropriate” texts is overly complicated and may actually end up undermining the Common Core’s emphasis on improving the quality and rigor of the texts students are reading.
Take, for example, the book recently released by the International Reading Association entitled Text Complexity: Raising Rigor in Reading. The first chapter of the book (blogged here) made a strong argument against the practice of assigning “just right” books and in favor of selecting more rigorous texts.
Having made a persuasive case for upping the rigor of readings, the authors devote the better part of the remaining eighty pages to showing, in great detail, just how complicated this process can become when put into practice. What unfolds is a dizzying array of quantitative and qualitative measures that teachers can use to select appropriate texts.
The authors warn teachers that relying on quantitative measures alone (word and sentence length, word frequency, and text cohesion), which are by far the easiest and perhaps even the most reliable way to pin a text to a particular grade band, is “too problematic to be effective.”
Of course, the authors are right that quantitative measures alone can’t give you a complete picture of a text’s complexity. Poetry, for example, is notoriously hard to pin to a particular grade. And even children’s literature has shades of meaning that, depending on the purpose, could make it appropriate to read in an advanced philosophy course. And for literature, quantitative measures become increasingly unreliable at the high school level.
That said, their very thorough explanation of the remaining two dimensions of text complexity—qualitative factors and “reader and task considerations”—turns text selection from a fairly straightforward practice to a time-consuming exercise that seems fraught with error.
The book explains those factors as follows:
- Qualitative dimensions: these refer to “those aspects of text complexity best measured or only measurable by an attentive human reader, such as levels of meaning or purpose; structure; language conventionality and clarity; and knowledge demands.”
- Reader and task considerations: “While the prior two elements of the model focus on the inherent complexity of the text,” the authors explain, “variables specific to particular readers (such as motivation, knowledge, and experiences) and to particular tasks (such as purpose and the complexity of the task assigned and the questions posed) must also be considered when determining whether a text is appropriate for a given student.”
That seems relatively straightforward until you realize that the “qualitative measures of text complexity rubric” has thirteen indicators that teachers can use to score a text—from its “density and complexity” to its purpose, genre, organization, text features and graphics, and so on. Included are four indicators that address the “knowledge demands” of the text. The book also encourages teachers to examine the “levels of meaning” in the text, since obviously there are many books that can be read at different levels.
Yet doesn’t such a complex rubric overcomplicate the process of text selection? And is it any more reliable than simply asking teachers to pair a quantitative measure with some measure of common sense?
In fairness, the authors do note that teachers should rely on quantitative measures and use these qualitative measures primarily to drive planning and instruction. But that note is very much an aside, and their conclusion very clearly directs teachers not to rely on quantitative measures—a message that, I fear, too many will take very much to heart. And I worry that the focus on evaluating texts on each of these qualitative measures leaves out what is perhaps the most critical question: Do the texts we’re asking students to read have important cultural and literary significance?
For instance, given the choice between young adult fiction that is considered “appropriate” for eighth or ninth grade and Little Women, wouldn’t it be better to encourage teachers to teach Little Women than to use “qualitative dimensions” and “reader and task considerations” to justify the book with less literary significance?
This is precisely what Diana Senechal wisely suggested in a comment on this blog a few months ago. She warned:
We get so hung up on measuring exactly what students are getting out of their reading that we forget the importance of being in a little over one's head, being surrounded with language, allusions, and ideas that go beyond the cozy and familiar, and NOT having it all made clear.
But, to get there, she suggested a different, simpler approach than the one outlined in Raising Rigor in Reading:
I'd like to suggest something different from the "just right" and the "grade appropriate" approaches: an "excellent works" approach. Of course, the works selected should not be inappropriate for the grade. But no work of literature is "grade appropriate," strictly speaking; if it's worth its salt, it can be read at many different levels.
Some will say that Romeo and Juliet is not appropriate for eighth grade—for the simple reason that students won't understand all of it. But you have to give students the experience of reading things they don't fully understand at first. Otherwise you will have to exclude a great deal of literature.
It’s possible that a “great books” approach to text selection would yield far better results because it might steer teachers clear of teaching young adult literature, even when it’s technically grade appropriate, in favor of higher-quality literature and literary nonfiction. And, in the process, teachers would save time that could go into preparing lessons equal to the challenge of bringing these works to life.