By Robert Pondiscio
Were there any shootings at your workplace last year? Want some time to think about it? Better check the files or ask the H.R. department. Maybe you were out that day or forgot. Are you really, completely, hand-to-God, one hundred percent certain you know for a fact whether there was or was not a shooting at your work in the last twelve months?
It’s not a trick question. Of course you’re certain. If someone had fired a shot in anger in your office, factory, or school, it’s something you wouldn’t forget quickly. Or ever.
If you missed the news in the run-up to the Labor Day holiday, Anya Kamenetz of NPR committed a remarkable act of journalism last week. According to U.S. Education Department data for the 2015–16 school year, 240 schools reported at least one incident involving a school-related shooting. “NPR reached out to every one of those schools repeatedly over the course of three months,” Kamenetz reported, “and found that more than two-thirds of these reported incidents never happened.” Of the 240 incidents reported by the U.S. Ed Department’s Civil Rights Data Collection (CRDC), NPR was able to confirm only eleven.
Eleven versus 240 is not, to put it blandly, a small discrepancy. Kamenetz cited a separate investigation by the ACLU of Southern California, which likewise could verify fewer than a dozen of the incidents and found an astonishing 59 percent error rate in the CRDC data. That same data also informs our views and policy positions on things like chronic absenteeism—over six million students reportedly missed fifteen or more days of school—and corporal punishment. (Aside to Ms. Kamenetz: If NPR wants to continue its valuable service playing MythBusters, track down some of the 110,000 students the CRDC data claims were subjected to corporal punishment. I’ll wager heavily that number is wildly overstated, misreported, or based on an impressionistic definition that most of us wouldn’t recognize or accept as “corporal punishment.”)
If an incident as rare and binary as a shooting—either a gun was fired in a school or it was not—is challenging to recall, record, and report accurately, then how much confidence should we have in Office for Civil Rights (OCR) statistics or other data on attendance, access to rigorous coursework, special education services, incidents of bullying, and the myriad other data points that inform education policy and practice? Do we even want to talk about charged and volatile issues like school suspensions and disparate impact? If we are relying on 100,000 public schools to accurately capture, code, and self-report a wide spectrum of critical data, it is not unreasonable to wonder what else they get wrong. And how wrong.
The Data Quality Campaign, a thirteen-year-old nonprofit that advocates for improved collection, availability, and use of high-quality education data, was sufficiently alarmed by the NPR report to issue a statement noting that good data “takes time to collect and report accurately…Because the information used in the NPR story is from the first year that all schools were required to report information as part of the CRDC, it still needs the benefit of time to ensure that the data is both accurate and reliable.” Most school-level data is self-reported, explains Paige Kowalski, the organization’s executive vice president. Much depends on the person responsible for data entry in a school (seldom a full-time responsibility). What is their job? Do they understand how this information is used and why it might be important to do it accurately? Are there stakes attached to getting it right? “It gets down to what does it take to get quality data, what’s the role of the feds, what’s the role of the state, what’s the role of the district?” she explains. “And finally, that person who sits in the school who has the phone on their ear, typing with one hand, talking to a student across the way, and eating their lunch all at the same time.”
Kowalski recalls being astounded early in her career that “states didn’t have an accurate count of the number of boys and the number of girls enrolled in their schools because it was up to the district to determine data fields and definitions. We couldn’t get gender right,” she tells me. “But once we understood why, it wasn’t complex to fix.” Another frustration was over the wildly different ways four-year high school graduation rates were counted and calculated. The challenges, she says, tend to be a function of how questions are worded and how data are collected, coded, and reported. These are reasonable explanations, but they don’t bolster the public’s confidence that we are on firm ground when we make confident assertions about “what we know” based on data.
The eyebrow-raising NPR report caps off a bad few months for data, research, and evidence-based practice. Researchers failed to replicate landmark studies like the “marshmallow test” of delayed gratification. The validity of the widely cited “30 million word gap” study of home language has come under suspicion. Last month Jay Greene of the University of Arkansas lit a small brush fire that deserved to be bigger by calling into question whether political bias affects what research gets published in leading journals. (Spoiler alert: It does). Earlier this year, a feel-good story about graduation rates at a high-poverty Washington, D.C., high school turned out to be mostly bogus. We continue to tout “historic” high school graduation rates while having no idea whether kids are graduating college and career ready, or are merely being kicked up and out via various credit recovery schemes (I know which way I’m betting).
It’s not surprising when data with stakes attached, like graduation rates, are off; schools have every incentive to report data casting themselves in the most favorable light. It’s harder to explain away getting something like school shootings egregiously wrong, neither noticing nor caring when the data don’t pass the smell test. Credulousness is surely a factor. The lines between education research, advocacy, activism, and agenda-driven media coverage get blurry at times, increasing the likelihood that we will either actively promote or fail to apply appropriate skepticism to “policy-based evidence-making.” If you favor strict gun control measures to combat school shootings, for example, then an epidemic level of incidents reaffirms your sense of crisis and advances your narrative and prescription. If the idea of arming teachers to stop school shooters alarms you, it behooves your policy argument to note how vanishingly rare such events are. Making decisions on behalf of children is muddied even further with wholly invented data and statistics wielded by the nakedly self-interested pushing for changes in curriculum, pedagogy, and practice. You’ve surely heard, for example—and perhaps repeated—that we must radically transform schools, since 85 percent of the jobs that today’s students will do as adults haven’t been invented yet. Or maybe it’s 65 percent. No, wait. It’s 60 percent. But such imprecision hardly matters when one-third of all jobs are about to be automated. Or one-half. Or whatever.
The crisis of confidence in data, if that’s what this is, ostensibly benefits the testing-and-accountability wing of ed reform, since, as my colleague Mike Petrilli notes, it’s hard to misreport test scores. It also bolsters the arguments of those of us inclined to weigh more heavily parental prerogative, including greater latitude for school choice. If we can’t trust our data to tell us what’s going on in schools, what parents see with their own eyes is less easily dismissed. But the bottom line is this: If your pet reform policy, program, or initiative rests on “what we know” based on school-reported data, this might be a good time to change the subject.
Overall, mathematics standards in the United States are far stronger today than they were in 2010, when Fordham conducted its last fifty-state review. And much of that improvement is due to the Common Core math standards, which earned a rating of A- in our 2010 report and a score of 9 out of 10 in our most recent review. In general, the states with the strongest math standards are the ones that have built on the Common Core, modified it in minor ways, or independently drafted separate standards that mirror its pacing and organization.
So why are today’s standards better than the math standards of a decade ago? Here are four strengths that our expert mathematics reviewers found in state math standards in 2018.
1. Stronger focus on arithmetic in grades K–5
Because it is the foundation for much of the mathematics that students will encounter in higher grades, experts agree that arithmetic should be the primary focus of math instruction in grades K–5. Yet in 2010, the biggest problem we identified in state math standards was that arithmetic wasn’t a sufficient priority. As mathematicians Steven Wilson and Gabrielle Martino lamented at the time:
Many states include solid arithmetic standards, but these are buried among a multitude of distracting and less important content... By failing to clearly prioritize this essential content, states fail to ensure that it gets the attention it deserves. Only a few states either explicitly or implicitly set arithmetic as a top priority. More often, states devote fewer than 30 percent of their standards in crucial elementary grades to arithmetic.
Thanks in large part to the Common Core, that is no longer true. To the contrary, a clear focus on arithmetic is now evident in most states’ K–5 math standards. For example, most states’ standards begin with a clear focus on counting, whole numbers, and place value. And most also expect students to know their single-digit addition, subtraction, multiplication, and division facts—and to be proficient with the standard algorithms for these operations, as well as strategies related to place value and the properties of operations (usually by the end of third or fourth grade, depending on the operation and the expectation). Finally, most states systematically develop a strong understanding of fractions and decimals.
To be clear, topics such as geometry and measurement, the representation of data, and algebraic reasoning are also included in most states’ elementary standards. However, in strong standards these topics are connected to number and operations—enhancing rather than diluting the focus on arithmetic.
2. More coherent treatment of proportionality and linearity in middle school
The study of fractions is closely tied to proportional relationships and reasoning (i.e., rates and ratios). And such reasoning, in turn, provides students with a platform for understanding slopes and linear relationships (e.g., y=mx+b), which are a key foundation for algebra. Thus, the sequence and pacing of these topics is critical to helping students move from elementary to middle to high school mathematics.
In recent years, the treatment of all of these topics has improved in many states. For example, in most states that used the Common Core as a starting point, ratios and proportional relationships is a main topic in grades six and seven, slope is developed in grade seven, and linear equations are an important part of grade eight, where they are both analyzed and used to describe linear relationships for bivariate data.
3. Appropriate balance between conceptual understanding, procedural fluency, and application
In the past, math experts quarreled over the relative importance of students’ conceptual understanding, procedural fluency (or ability to compute quickly and accurately), and ability to apply what they have learned. Yet, as the 2008 National Math Advisory Panel noted in its final report:
To prepare students for Algebra, the curriculum must simultaneously develop conceptual understanding, computational fluency, and problem-solving skills. Debates regarding the relative importance of these aspects of mathematical knowledge are misguided.
Thankfully, judging from their current math standards, most states have embraced the importance of each of these capacities and the implicit compromise represented by the quote. For example, the introduction to the Common Core states that “mathematical understanding and procedural skill are equally important” while also asking students to “make sense of problems and persevere in solving them.” This tripartite mission is also evident in the standards themselves. For example, most states now ask students to explain their reasoning, in addition to performing computations and solving problems. And in addition to standards about formal mathematical proof and carrying out mathematical procedures accurately, most states’ high school frameworks now include modeling, which links classroom math and statistics to everyday life, work, and decision-making.
4. Better organization and teacher supports
Well-organized math standards do at least two things. First, they provide an account of key themes for each grade level or course, as well as a list of major benchmarks to ensure that instruction is appropriately focused. Second, they are organized in a mathematically coherent way that highlights how mathematical topics fit together within a grade or course and how they are connected to prior and future work.
The Common Core math standards are a clear example of well-organized standards. For example, prior to the content standards for each grade level (K–8), there is an introduction describing the focus for the grade and a bulleted list of critical topics. Similarly, each high school domain (or area of math) includes a narrative introduction, followed by the individual standards for each of the clusters in that domain. In general, the organization of the Common Core into domains and clusters provides teachers and other stakeholders with conceptual cues about the connections between individual standards and the intended learning progressions within and across grade levels. And helpfully, states such as California and Massachusetts have extended these positive features to high school courses, a step other states should also consider.
In addition to content standards, most states have also adopted practice or process standards, reflecting the broad consensus among math experts that there are certain “mathematical habits of mind” that educators at all levels should seek to develop in students. For example, the Common Core includes eight “Standards for Mathematical Practice,” abbreviated versions of which are listed in the introduction to each grade (K–8) and high school domain. And again, states such as Massachusetts have helpfully expanded on this approach by articulating particular expectations for each of three grade spans: pre-K–5, 6–8, and 9–12.
Finally, most states now include a mathematical glossary in their standards, as well as other resources and links. The form and content of these are too diverse to summarize here, but many are likely to be useful for teachers. For example, a number of states have developed “vertical alignment charts” that describe the desired progressions for particular topics across grades, and there is a “coherence map” for the Common Core that shows connections across both topics and grades.
As others have noted, strong math standards are just the beginning. To implement them well, policymakers, curriculum developers, principals, and, above all else, teachers must understand why they are strong so that textbooks, professional development, pedagogy, and practice reflect the same shared vision of mathematical excellence.
This editorial was first published by the New York Daily News.
New York City’s eight selective high schools are rightfully sought after. Most consistently rank near the top of U.S. secondary schools. Their alumni include multiple Nobel laureates. Their graduates garner bountiful acceptance letters from Ivy League universities and go on to become the innovators, job creators, scientists, and leaders of tomorrow.
Each year, tens of thousands of eighth graders seek admission, which for decades has been based solely on whether an applicant gets above a cut-off score on the city’s Specialized High Schools Admissions Test (SHSAT). Only a few thousand make the cut.
But the racial profile of those admitted does not remotely mirror the diversity of the city’s population. Black and Hispanic youngsters comprise 67 percent of New York City students, but just 10 percent of those who attend the eight elite public high schools. Various efforts have been made over the years to fix this, but they’ve only made small dents.
Mayor de Blasio recently proposed his own remedy: overhauling the admissions process. He would scrap the SHSAT, which is taken only by students seeking admission to the specialized high schools. Instead, he’d use New York State’s standardized test results plus class ranks to select students for the specialized schools. He’d admit the top students from each middle school based on these broadened criteria, provided they’re among the top students citywide.
His proposal—if implemented correctly—could truly benefit high achievers from disadvantaged backgrounds.
Critics say admitting youngsters who don’t get top marks on an objective admissions test will erode the schools’ quality. That’s a fair concern, as is the introduction of more subjective elements into what’s already an extremely fraught process.
But all these issues extend well beyond Gotham. All across America, other programs meant to challenge high-achieving students struggle to devise entry arrangements that don’t shut out youngsters who are capable of outstanding performance but come from disadvantaged backgrounds.
And through all these efforts, there have emerged two proven ways to safeguard elite schools’ excellence while also giving these children a better chance to attend schools that truly challenge them and maximize their potential. de Blasio’s plan contains elements of each.
The first is screening every student using a universal assessment that almost everyone takes—like the New York State tests that the mayor proposed—rather than relying on a separate exam that families must seek out. Broward County, Florida, employed this approach, and it worked really well for poor and minority youngsters, report economists Laura Giuliano and David Card.
The second is analyzing scores at the school level instead of the district level, so that able kids in every “feeder” school get a fair shot at the prize. This diversifies the qualifying populations across communities in a way that doesn’t favor advantaged kids as much as district-wide competitions tend to do. That’s why the University of Texas offers admission to the top 7 percent of graduates of every high school in the state rather than the top 7 percent statewide.
The problem with de Blasio’s plan, however, is that many details are yet to be decided, so botched execution could result in the lowered standards that many fear.
To protect against this, and to ensure that the selection criteria are fair and reliable, state lawmakers should codify a new admissions formula that includes a mix of test scores and grades from both seventh and eighth grade, but that preserves the central role of external assessments in admissions decisions.
Attaching high stakes to letter grades could lead to unintended consequences, such as influencing students to take easy classes and inflating grades.
New York City has long failed its high achievers from disadvantaged backgrounds. They seldom get the opportunity to attend challenging programs with bright peers, and their potential is rarely maximized. So Mayor de Blasio’s reforms could be a step in the right direction—if they’re implemented well.
Editor’s note: Look for a rebuttal to Tyner and Wright’s argument in next week’s Gadfly, from Fordham’s own Chester E. Finn, Jr.
Most states are now including a measure of student absenteeism in their Every Student Succeeds Act (ESSA) accountability system for the so-called “fifth indicator” of student success, so many districts are now keen to strengthen student attendance. The results of a recent study could help.
Researchers at Harvard and the University of California, Berkeley examined the results of a randomized experiment where parents were provided with information about their child’s absences (or not) to see if the information intervention actually reduced chronic absenteeism.
The study was conducted in the School District of Philadelphia, the eighth largest in the U.S. The sample included parents of over 28,000 high-risk kindergarten through grade twelve students. Analysts defined high-risk students as those absent three or more days more than the modal student at their grade level in their school, but no more than two standard deviations more days absent than the mean student in their school-grade. (The district believed students above that ceiling had most likely left without informing it, or were experiencing a grave challenge that would make them less responsive to treatment.)
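As a rough illustration, the study’s two-sided selection rule—more than a few days above the school-grade mode, but no more than two standard deviations above the school-grade mean—might be sketched as follows. The data layout, field names, and function are invented for illustration; the paper’s actual implementation is not described here.

```python
# Hypothetical sketch of the "high-risk" filter described above.
# Each record is (school, grade, days_absent); names are invented.
from collections import defaultdict
from statistics import mode, mean, stdev

def high_risk(records):
    """Flag students absent at least 3 days more than the modal student
    in their school-grade, but no more than 2 standard deviations above
    that school-grade's mean (students beyond the ceiling are excluded)."""
    groups = defaultdict(list)
    for school, grade, absences in records:
        groups[(school, grade)].append(absences)

    flagged = []
    for rec in records:
        school, grade, absences = rec
        vals = groups[(school, grade)]
        floor = mode(vals) + 3                     # lower cutoff
        sd = stdev(vals) if len(vals) > 1 else 0.0
        ceiling = mean(vals) + 2 * sd              # upper cutoff
        if floor <= absences <= ceiling:
            flagged.append(rec)
    return flagged
```

The upper bound matters: without it, students who have effectively left the district would dominate the flagged group and dilute any measured treatment effect.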
Analysts randomly assigned households in equal numbers to a control group or one of three personalized treatment conditions, whereby they received up to five mailed postcards throughout the 2014–15 school year. One card was a generic reminder that students fall behind when they are absent and parents can help with absences; the second added special information about their child’s total absences; and the third added yet another data point that included the modal number of absences among their child’s classmates for relative comparison purposes. The control group received no other communication other than typical school communications like report cards.
Compared to the control group, analysts found that students in the “total absences” condition were 10 percent less likely to be absent and students in the “relative absences” condition were 11 percent less likely. Students in the “generic reminder” group were just 8 percent less likely to be absent, demonstrating that providing the additional information helped. They also analyzed whether siblings in the household were impacted and found that, among those students assigned to the generic reminder, there was no evidence of spillover effects on the sibling. But for those students receiving the other two types of postcard reminders, the spillover effects on the siblings were nearly as large as the effects for the “focal” student.
They found no evidence that treatment effects varied by student grade level, gender, race, or total absences in the previous year. And they were unfortunately unable to assess whether the intervention impacted standardized test scores.
Finally, they surveyed parents to confirm they could remember receiving the postcards. A majority responded affirmatively. The survey also showed that parents did not change how they viewed the importance of reducing absences if they had received the generic reminder card versus the other two types.
Analysts hypothesize that parents may not realize how many days their kids are actually missing across the span of a full school year as those absences accumulate. Hence the more informative cards adjusted their perceptions.
Let’s hear it for yet another study showing that inexpensive informational nudges can make a big difference in influencing student or parental behaviors. After all, high-dollar, high-stakes interventions don’t always work out.
SOURCE: Todd Rogers and Avi Feller, “Reducing Student Absences at Scale by Targeting Parents’ Misbeliefs,” Nature Human Behaviour (April 2018).
In response to No Child Left Behind and Race to the Top, states including Georgia restructured teacher evaluation criteria to include both rigorous principal-conducted evaluations and student test scores. Andrew Saultz conducted an exploratory study in Georgia for the American Enterprise Institute to identify patterns in teacher dismissals and their relationship, if any, to teacher quality.
Saultz gathered 136 teacher dismissal cases from three of the six largest Georgia school districts: Fulton, DeKalb, and Atlanta. These records contained the recommendations of the tribunal reviewing each case, the state board of education’s final decisions, or both, stating the main cause for termination, as well as other offenses and an optional explanation. These causes came from a predetermined list outlined in Georgia’s Fair Dismissal Act. Saultz analyzed the results to understand patterns in Georgia’s teacher dismissals, looking specifically for mentions of teaching and/or teacher quality. He then broke down the results by main cause of termination and whether that cause is linked to teaching and/or teacher quality.
“Willful neglect of duties” was the most frequently cited cause for terminations, with 38 percent of cases labeled as such. It includes transgressions like “failure to complete lesson plans” and “failure to report to work.” The second most frequently cited fireable offense was “incompetence,” which appeared as the primary cause for 29 percent of cases, and usually relates to failures in performing job-specific duties, such as inadequate student records, low performance on student assessments, and failure to improve instruction.
Yet the study found that only six of the 136 cases for dismissal, or 4.4 percent, mentioned teacher effectiveness, teacher quality, instruction, or student learning. For terminations due to “willful neglect of duties,” for example, in only one of the fifty-one cases did records say anything about a teacher’s teaching. For incompetence, it was just five of thirty-eight. No other firings—be they for insubordination, failure to maintain proper training or certification, staff reductions, or something else—mentioned teaching in any capacity. And of the six cases that did, none said anything about teachers’ failure to implement strategies their districts recommended to improve instruction—which might indicate that even these instances had little to do with instructional quality, or that districts did little to try to correct deficiencies.
The study is limited by sample size. Only three school districts were included, each of a similar size and each in Georgia. The clear patterns it establishes might therefore not apply to smaller and more rural districts in Georgia, or school systems in other states.
Yet districts may be to blame for the bigger limitation: the lack of information recorded in dismissal cases. The collected case files were usually generic, constrained to Georgia’s eight formal causes. And a couple of proceedings omitted a dismissal reason entirely.
If one of the purposes of teacher evaluations is to identify teachers’ instructional deficiencies and correct them—and it should be—these results suggest Georgia is failing mightily to fulfill that purpose. We ought to investigate whether other states are, too.
SOURCE: Andrew Saultz, “What Does One Do to Get Fired Around Here? An Analysis of Teacher Dismissals in Georgia,” American Enterprise Institute (June 2018).
On this week's podcast, Laura Jimenez, a director at the Center for American Progress, joins Mike Petrilli and David Griffith to discuss the state of the high school diploma and whether it should align with college readiness. On the Research Minute, Amber Northern examines how reforms in New Orleans affected teachers’ perspectives on learning and work environments.
Amber’s Research Minute
Lindsay Bell Weixler et al., “Teachers’ Perspectives on the Learning and Work Environments Under the New Orleans School Reforms,” Educational Researcher (July 2018).