Last month, Fordham released a detailed review of Florida’s latest K–12 academic standards for English language arts (ELA) and mathematics. In it, ELA reviewers Timothy Shanahan and Douglas Fisher and math reviewers Solomon Friedberg, Roger Howe, and Francis (Skip) Fennell cited several strengths of the Florida standards but also made specific recommendations for their improvement. Ultimately, the review teams, working independently, each determined that their subject deserved a score of 6/10, which translated into a rating of “Weak” on our scoring metric, meaning that “significant and immediate revisions are recommended.”
Last Monday, the Independent Institute released its own review of Florida’s new standards, with a foreword penned by long-time Common Core opponent Ze’ev Wurman. The English language arts standards were evaluated by David Steiner and Ashley Berner (leaders of the Johns Hopkins Institute for Education Policy) and the mathematics standards by James Milgram (professor emeritus of mathematics at Stanford University and also a long-time critic of the Common Core). In a nutshell, the Independent Institute came to very different conclusions about the merits of the standards, viewing them much more positively than our review did.
Fordham has been publishing reviews of state standards for almost twenty-five years. In that time, many have accused us of being “tough graders.” (Perhaps they’re unaware of the benefits of rigorous grading practices!) Our expert reviewers have always developed, upfront, the comprehensive criteria by which they then assess the content, rigor, clarity, and specificity of state standards. They do that before they lay eyes on the first set of standards, in part to hold themselves accountable to a rigorous external benchmark. That means they don’t grade on a curve relative to the quality of standards in other states. Equally important, their adherence to a well-crafted evaluation rubric ensures that standards are not unfairly penalized or rewarded should individual reviewers be inclined to slash or lavish points based on their respective pet peeves or sacred cows. Unfortunately, we detect this inclination in the Independent Institute’s report, which does not specify any criteria used by the reviewers to judge the quality of Florida’s standards.
That said, it’s not our intent to lambaste their review, both because we find some points of mutual agreement and because we consider the reviewers colleagues in our shared quest to improve public education. Still, lest our readers fall prey to confusion arising from dueling standards reviews, our ELA and math experts crystallize key differences below.
English language arts
1. We agree with Steiner and Berner that building knowledge is an essential aim of ELA standards. That’s why we were so critical of Florida’s failure to provide sufficient guidance for how the list of texts included with its standards should be used, especially with respect to writing.
As indicated in our original review:
One innovation in the Florida standards is the inclusion of a series of tables throughout that show which sample texts could or should be used to teach particular reading standards. The idea that these are also the ideal texts to serve as models for students’ writing work is encouraging, yet none of the tables reference any connections between the texts and the writing standards. (pg. 13)
To be clear, the Florida standards provide some links between books and standards, yet they don’t do so consistently or incisively. The research base indicates that “text sets” (collections of texts tightly focused on a specific topic) provide superior knowledge-building benefits. Yet the Florida standards present their civics texts in what appears to be a random organization.
Once again from our review:
On the plus side, the standards require or suggest (the verbiage is unclear) the reading of K–12 sample lists of literary texts and texts with a civics emphasis—an innovative inclusion—but then fail to provide any kinds of learning goals that would enable students to engage successfully in the specialized demands of such reading. (pg. 11)
For these reasons, the Florida standards are not as focused on building knowledge as the Independent Institute suggests.
2. Steiner and Berner state that Florida already includes disciplinary literacy standards—defined as the specialized ability to read history, science, or technical materials in appropriate and sophisticated ways—and that the Fordham reviewers missed them. In fact, the words “disciplinary literacy” do not appear in the standards at all. What Steiner and Berner cite are separate documents called “Literacy in the Content Areas Toolkits,” which are based on the Common Core State Standards (CCSS) that Florida replaced with its new standards. The website that houses the “toolkits” says nothing about their fate or status. Either way, if the Florida Department of Education wants disciplinary literacy standards, it should include them in the new standards themselves. Period.
3. In terms of Florida’s listening standards, Steiner and Berner are correct that kindergartners are expected to learn to listen to others in a conversation. In addition, a few sentences of catch-all language on page 147 of Appendix A (!) declare that all students in grades K–12 “should engage in academic conversations” and explain what that means, but those are “instructional activities,” not standards. The only thing measurable from that language is whether teachers used those activities in their classrooms. That’s obviously no substitute for a series of measurable listening standards that build upon one another across individual grade levels. Research has shown “a significant relationship between listening comprehension and proficient reading.” Florida’s approach makes this the weakest listening strand in the nation.
4. Steiner and Berner believe “a modest number of progressions should be reviewed for sequencing and redundant repetitions.” Clearly “modest” is in the eye of the beholder—as are the negative ramifications of such repetition on student learning. As it stands, repetitive learning progressions exist in at least seven areas: improving writing, researching and using information, multimedia, academic vocabulary, morphology, context, and connotation. For instance, see page 24 of Florida’s standards for the academic vocabulary progression, where the minimal changes in wording between grades 6–12 and grades K–5 are apparently meant to be self-explanatory to teachers:
ELA.12.V.1.1 Integrate academic vocabulary appropriate to grade level in speaking and writing.
ELA.11.V.1.1 Integrate academic vocabulary appropriate to grade level in speaking and writing.
ELA.10.V.1.1 Integrate academic vocabulary appropriate to grade level in speaking and writing.
ELA.9.V.1.1 Integrate academic vocabulary appropriate to grade level in speaking and writing.
ELA.8.V.1.1 Integrate academic vocabulary appropriate to grade level in speaking and writing.
ELA.7.V.1.1 Integrate academic vocabulary appropriate to grade level in speaking and writing.
ELA.6.V.1.1 Integrate academic vocabulary appropriate to grade level in speaking and writing.
ELA.5.V.1.1 Use grade-level academic vocabulary appropriately in speaking and writing.
ELA.4.V.1.1 Use grade-level academic vocabulary appropriately in speaking and writing.
ELA.3.V.1.1 Use grade-level academic vocabulary appropriately in speaking and writing.
ELA.2.V.1.1 Use grade-level academic vocabulary appropriately in speaking and writing.
ELA.1.V.1.1 Use grade-level academic vocabulary appropriately in speaking and writing.
ELA.K.V.1.1 Use grade-level academic vocabulary appropriately in speaking and writing.
Mathematics
Our response is organized around three of our core objections to the (very brief) review conducted by James Milgram.
1. Milgram: “I find the criticisms of these standards leveled by the Fordham Institute reviewers—almost exclusively that these Florida standards do not teach mathematical problem-solving—incomprehensible.”
Response: None of our recommendations even mentions “mathematical problem-solving.” However, we do recommend a greater emphasis on mathematical reasoning and conceptual understanding. As we explain, “conceptual understanding in mathematics is as critical as procedural fluency because it supports long-term retention and future learning.” We also point out gaps in how the teaching of place value is described in the B.E.S.T. standards; the desirability of paying more attention to computing flexibly, as well as by algorithm; and the need for an improved structure that helps teachers make connections between the emphases for each grade and the associated standards.
2. Milgram: “Also note the careful and usually challenging examples that are present, not only in these selected examples, but also throughout the standards.”
Response: We flat-out disagree with this blanket endorsement of every example. Our review points to many examples that are ambiguous or even incorrect. We flagged so many problematic examples, in fact, that we did not include them all in the published review (though we make them available via email).
From our review:
The standards contain examples, but they are often not helpful. Many are for the easiest benchmarks to understand. The example for a fifth-grade benchmark (MA.5.AR.2.1) states, “The expression 4.5 + (3 × 2) in word form is four and five tenths plus the quantity 3 times 2.” In other places, an example is not provided where it would be especially helpful—for example, to enhance grade 9–12 benchmark MA.912.AR.2.1, which reads, “Given a real-world context, write and solve one variable multi-step linear equations.” Some examples are not good illustrations of the concepts, such as this example for grade 6 benchmark (MA.6.NSO.1.4): “Michael has a lemonade stand which costs $10 to start up. If he makes $5 the first day, he can determine whether he made a profit so far by comparing |−10| and |5|.” Several examples provide “real-world problems” but do a poor job of illustrating anything real world. The example for grade 7 benchmark MA.7.AR.4.5, in which students are to calculate how many tanks of gas it takes to drive a car from Florida to Maine, implicitly assumes that the driver drives until his tank is empty and never leaves the road and ignores the issue of significant digits. (pg. 25)
Our recommendation is to revise the existing examples and add many more that are carefully constructed.
3. Milgram: “Our [my] review finds that Florida’s B.E.S.T. standards correctly focus on clear goals for procedural fluency appropriate for much of the K–12 curriculum, rather than empty and inappropriate ‘problem-solving’ skills.”
Response: The best evidence shows that coupling procedural fluency with conceptual understanding best prepares students for higher levels of mathematics. Thus, our review calls for an increased emphasis on conceptual understanding in the benchmarks at all grade levels, as a complement to procedural fluency.
We also recommend adding benchmarks that call for students to compute flexibly—selecting the best approach for each specific problem. We urge revision of the Mathematical Thinking and Reasoning (MTR) standards to place greater emphasis on mathematical reasoning and proof, and to integrate those standards with the content standards and benchmarks. And, in particular, we recommend adding benchmarks that require students to explain their reasoning, both to ensure understanding and to set the stage for processing new information.
All of this clearly does not equate to “empty and inappropriate problem-solving skills.”
To state the obvious, our reviewers take seriously their charge to review state standards rigorously. So we aren’t troubled by being called tough graders. That’s why, until our reviewers’ key recommendations are addressed, Fordham will consider the new Florida standards a weak replacement for what the state once had.
Incidentally, Steiner and Berner use the Next Generation Science Standards as evidence that disciplinary literacy standards should be included in the various content standards themselves. Yet those standards were explicitly designed with the understanding that states had already adopted the CCSS. To wit, any communications standards the NGSS included were meant to be used in addition to, or as an extension of, the CCSS disciplinary literacy standards.