By Kathleen Porter-Magee
Last week, I had the privilege of visiting several high-poverty urban schools in Cleveland. Each was serving some of the nation’s most disadvantaged students and beating the odds by arming their pupils with the knowledge, values, and skills they need to succeed.
Whenever I visit a school, I look for the unplanned things that give you a window into hidden vibrancy or challenges in the community. During my visit to one school last week, two unplanned interruptions stood out. First, the assistant principal received a call from her middle school social studies teacher to share some good news: A group of their seventh graders won first place in the John Carroll University “We the People” Call for Action and Social Justice Program, the school’s third first-place victory in as many years.
Not long after, an upper-elementary math teacher stepped out to take a call from the Cleveland Cavaliers. It seems that one of her students is the only student in Ohio to be chosen for the NBA Math Hoops competition, and its organizers wanted to let her know that they were sending the Cavs mascot to cheer on the student.
Was I getting a VIP tour of the latest hot new charter management organization? No, it was just another day for St. Thomas Aquinas, a small Catholic school that, without fanfare or much in the way of resources, continues to work miracles, serving predominantly low-income voucher students in a neighborhood Fordham recently dubbed a “desert.”
The new Fordham report America’s Charter School Deserts has gotten well-deserved attention, in part because it raises an important question about whether charter advocates have succeeded in their quest to expand high-quality options for our nation’s neediest families. The report comes at a time when charter growth has slowed, or even stalled, in many places. For those dedicated to expanding charter schools, it serves as both a wake-up call and a call to action.
At the same time, words matter. You have to wonder what it says about the state of a broader school choice movement that ostensibly embraces both public and private options when a landscape full of often academically rigorous and life-changing Catholic schools is called a “desert.”
In fact, a quick analysis of just a few of the Fordham-identified “deserts” in large urban areas reveals that there are vibrant Catholic schools in nearly every one.
In many of these so-called deserts, Catholic schools are being overlooked—or even abandoned—as choice leaders focus on charter schools as the only viable option for expansion and growth.
As I think about these Catholic schools—those that significantly improve student lives and communities despite the challenges they face—I am reminded of the famous scene from “Monty Python and the Holy Grail” where, during a bubonic plague epidemic, a coroner is walking through a village with a wheelbarrow calling “bring out your dead!”
One villager brings out a man, slung over his shoulder. Just before the carried man is added to the wheelbarrow, he lifts his head and says, “I’m not dead! I’m getting better!”
Villager: No, you’re not. You’ll be stone dead in a moment.
Coroner: Oh, I can't take him like that. It's against regulations.
“Dead” man: I don't want to go on the cart!
Villager: Oh, don't be such a baby.
Coroner: I can't take him.
“Dead” man: I feel fine!
The punchline is that the coroner eventually kills the man to end the dispute. When you are a coroner, it’s a lot easier to deal with the dead than the living.
As I have more and more opportunities to visit thriving urban Catholic schools that are embracing change and driving life-changing results for disadvantaged students, I can’t help but feel like the dead man in the Monty Python scene. After all, as we lament the lack of choices in charter “deserts,” there exists a vibrant network of community-led urban Catholic schools that are regularly beating the odds on behalf of both their students and their communities.
Too often, though, we fracture the choice landscape by declaring these schools stone dead. We tell talented, entrepreneurial young leaders to look elsewhere for opportunities. We eulogize the sector without acknowledging the evidence that it is getting better.
To be sure, the challenges urban Catholic schools face are real. Take St. Thomas Aquinas in Cleveland, where 100 percent of the student body is African American and 99 percent of students receive a voucher from either the Cleveland or Ohio EdChoice scholarship programs. Because state law prohibits the school from charging tuition in excess of the $4,600 provided by the scholarship, the school has to choose between opting into the voucher program—which provides an enormous benefit to the community it serves—or charging tuition and putting the burden of financial sustainability on families who barely have enough to scrape by. That means that the school is resource-strapped by design, not by default.
And many of these schools have room to expand. But it’s almost as if we have convinced ourselves that these oases are merely a mirage—and so, like the Monty Python coroner, we unintentionally deal the Catholic schools sector a death blow in spite of growing evidence of their benefits.
Of course, if this is our future, we have to accept some responsibility for our fate. As Catholic school leaders, it is our job to embrace competition and transparency for results, to stand up and proudly defend successes, and to pave a path to sustainability, even as we face challenges and adversity.
The good news is that there are trailblazing Catholic leaders around the country who are doing just that. Archdiocesan leaders like Tim Uhl of Montana, Tim McNiff in New York, and Jim Rigg in Chicago. And national groups like Cristo Rey, Milwaukee’s Seton Catholic Schools, the CUSP schools in Newark, Philadelphia’s Independence Mission Schools, Notre Dame’s ACE Academies, and our own Partnership Schools. We are all working to find ways to build sustainable, academically rigorous, and faith-filled Catholic schools in these charter school “deserts.”
There is a heavy burden on those of us in the Catholic school world to demonstrate that our schools are worth saving and then to make sure everyone in the school choice movement knows about these successes.
But it’s also critical for those who support parent choice to acknowledge that diverse choices—including non-governmental private and faith-based schools—strengthen the entire landscape. And that the key to making a desert bloom is to start with educational oases that are already helping so many children thrive.
Kathleen Porter-Magee is the superintendent of Partnership Schools.
The views expressed herein represent the opinions of the author and not necessarily the Thomas B. Fordham Institute.
For weeks now, I’ve been debating Patrick Wolf, Michael McShane, and Collin Hitt about the relationship between short-term test score changes and long-term student outcomes, like college enrollment and graduation. Most recently I proposed three hypotheses that those of us who support test-based accountability—for schools of choice and beyond—would embrace. Now let’s see how the evidence stacks up against them.
To be clear, this is a slightly different exercise from asking whether test-based accountability policies lead to stronger outcomes in terms of student achievement. That’s an important endeavor too, and studies like Thomas Dee’s and Brian Jacob’s evaluation of accountability systems under No Child Left Behind indicate that the answer is yes.
But that’s not quite what we’re after, because those studies show that holding schools accountable for raising test scores…results in higher test scores. What we want to know is whether higher test scores—or, more accurately, stronger test score growth—relates to better outcomes for students in the real world.
So let’s take it one hypothesis at a time.
1. Students who learn dramatically more at school, as measured by valid and reliable assessments, will go on to graduate from high school, enroll in and complete postsecondary education, and earn more as adults than similar peers who learn less.
You would think that there would be lots of studies looking at students’ learning gains in elementary or middle school and how that impacts their high school graduation or college enrollment rates. Yet to my knowledge none exist. (Academics: Let’s change that please!)
What we do have is the famous Raj Chetty et al. study examining teacher value-added, which found that students who learn more in elementary school earn more as adults. It’s just one study, but it’s a remarkable finding, one that might be hard to replicate unless more scholars can gain access to the tax data Chetty and his colleagues have.
2. Elementary and middle schools that dramatically boost the achievement of their students should also boost their long-term outcomes, including high school graduation, postsecondary enrollment, performance, and completion, as well as later earnings.
Here we have a bit more to go on, at least if we look at studies that examine both individual schools and programs that are focused at least in part on elementary or middle schools. Remember that we’re interested in schools or programs that make a significant impact on achievement, for good or ill. According to Hitt, McShane, and Wolf’s review, there are four of those. I will use their words to describe the results:
- Harlem Promise Academies, which had “a positive and significant impact on math scores, a positive but an insignificant impact on high school graduation, and a positive but insignificant impact on college attendance.” (Also: Admitted females were 12.1 percentage points less likely to be pregnant in their teens, and males are 4.3 percentage points less likely to be incarcerated.)
- “No Excuses” Charter Schools in Texas, which “produced significant gains in ELA and math scores and in high school graduation rates.” They also had “a small and statistically insignificant impact on earnings,” according to the study itself.
- Boston Charter Schools, which had “positive and significant effects on language arts, positive and significant impacts on math scores, negative but significant impacts on high school graduation rates, and positive but insignificant impacts on college attendance rates.” Here we have our first hint of a mismatch. However, the negative finding for graduation disappears if we look at five-year graduation rates, lending credence to the theory that the city’s no-excuses, high expectations charter schools are making their students take more time to graduate, while boosting their achievement. Students were also more likely to enroll in four-year universities, where low-income students tend to earn credentials at higher rates.
- Other Charter Schools in Texas, which “produced significant gains in high school graduation rates, despite having negative but significant impacts on ELA and math scores.”* Here we have our first true mismatch. However, according to the study, attendance in these schools was also related to a decline in college enrollment rates and lower earnings as adults. In other words, this study actually bolsters the case for test-based accountability, while undermining the case for high school graduation rates.
So what to make of these studies? Positive and significant impacts on student achievement in the Harlem Promise Academies, Boston charter schools, and “no excuses” schools in Texas were related to positive but statistically insignificant impacts on high school graduation and/or college enrollment rates. Harlem Promise also reduced teen pregnancy (for girls) and incarceration rates (for boys); Boston charter schools had a positive impact on enrollment in four-year versus two-year colleges; and “no excuses” charters in Texas had a positive but insignificant impact on earnings. Meanwhile, the other charter schools in Texas saw negative impacts on test scores and negative impacts on college enrollment and earnings.
Any fair reading of this research would acknowledge a strong relationship between test score impacts and long-term outcomes. If there is anything to worry about here, it is the disconnect between high school graduation, on the one hand, and college enrollment and earnings, on the other, that manifests itself in the Texas study.
3. High schools that dramatically boost the achievement of their students should also boost their long-term outcomes, including postsecondary enrollment, performance, and completion, and earnings.
Here the research base is a tad larger. We can start with a 2016 study of Texas’s accountability system by all-stars David J. Deming, Sarah Cohodes, Jennifer Jennings, and Christopher Jencks, published in The Review of Economics and Statistics, and repackaged for a lay audience in Education Next, which found, in the authors’ words, that:
Pressure on schools to avoid a low performance rating led low-scoring students to score significantly higher on a high-stakes math exam in 10th grade….Later in life, they were more likely to attend and graduate from a four-year college, and they had higher earnings at age 25.
The second, a working paper by the University of Michigan’s Daniel Hubbard, finds, in his words, that:
Students who attend high schools with higher value added perform better in college, both in tested and untested subjects; a student who attends a high school one standard deviation above the mean level of value added will have first-year grades about 0.09 grade points higher than the grades of an identical student in an average high school. The effect remains positive and highly significant after a variety of adjustments to deal with selection into college and into high school. This result implies that schools with high value added are not earning those scores by teaching to the test or by reallocating resources toward tested subjects, but instead by preparing students effectively to perform well in the standardized test and beyond.
And what about the studies reviewed by Hitt, McShane, and Wolf? I count just two that fit with my hypothesis, in that they include significant findings for achievement; have data on either college enrollment or graduation; and aren’t looking at idiosyncratic models like early college, selective enrollment, or CTE. Let’s again use the AEI authors’ own words to describe the results:
- Chicago Charter High Schools, which “had a positive and significant impact on math scores…and a positive and significant impact on college attendance.”
- New York City’s Small Schools of Choice, which “had positive and significant impacts on ELA scores, and positive and significant impacts on high school graduation and college attendance.”
So what to make of the results for high school programs? All four show a clear match between achievement and college enrollment and/or performance. The Texas study found higher achievement led to higher earnings as well.
Where do I end up after rummaging through all of these studies?
- The research base is very thin—too thin for a serious meta-analysis. With only nine relevant studies, this is clearly a field still in its infancy.
- Almost all of the evidence we do have indicates that changes in test scores and in long-term outcomes match. In each of the nine cases, the student achievement impacts and the longest-run outcomes point in the same direction. Not all of the long-term impacts are statistically significant. But this is still a promising finding.
No doubt this debate will continue; we plainly need a lot more empirical evidence to inform it. In the meantime, the best studies we have indicate that test-based accountability is a smart approach, imperfect as it is, because students who learn more go on to do better in the real world. And yes, that’s what really counts.
* Chalkbeat's Matt Barnum pointed out that the gains in high school graduation rates were actually small and statistically insignificant.
“The effectiveness of public schools in developing engaged citizens has rarely been examined empirically,” notes a new Mathematica report on the impact on civic participation of Democracy Prep, a network of charter schools that educates more than 5,000 students, mostly in New York City. Perhaps not, but it’s certainly been assumed. We remain sentimentally attached to a gauzy myth of the American common school ideal and its presumed role in citizen-making, even without evidence of its effectiveness.
The number of Democracy Prep alumni who are of voting age is relatively small. Founded in 2006, and with twenty-two schools in five cities, the network only graduated its first class in 2013. But Mathematica’s study, using the most conservative interpretation of its data, found that “Democracy Prep increases the voter registration rates of its students by about 16 percentage points and increases the voting rates of its students by about 12 percentage points.” As a summary from the American Enterprise Institute notes, “the raw numbers were even stronger, a twenty-four-point increase in both, which suggest Democracy Prep doubled its students’ likelihood to register and vote.”
Bravo, Democracy Prep. But as a former (and hopefully future) DP civics teacher, I will confess that this finding is not particularly surprising to me. Honestly, I would have been alarmed if it were not the case.

As the Mathematica paper notes, Democracy Prep students participate in annual voter registration campaigns and other forms of direct civic engagement. Nearly every fall, students as young as kindergartners can be seen on the streets of Harlem registering voters; they are unmistakable in their distinctive bright yellow T-shirts with the slogan “I can’t vote, but you can!” High school seniors work all year on capstone “Change the World” projects wherein they research a social problem of interest to them and then plan and execute some manner of public response—a fundraising drive, a protest, an awareness campaign, etc. Students routinely offer testimony to representatives at all levels of government. Food drives, volunteerism, and “service learning” are encouraged. Passing the U.S. Citizenship Test is a graduation requirement.

The class that I taught was a seminar in civics and citizenship for graduating seniors, in which we would wrestle with constitutional issues at work in students’ lives, from campus speech codes to “broken windows” policing. All told, it is impossible to spend your school-age years at Democracy Prep and not get the message that active and engaged citizenship is what’s expected of you, with voting a rock-bottom, core adult responsibility. Candidly, I’m uncertain that we did a better job than other high-performing schools at actually teaching civics. But without question, Democracy Prep does a better job valorizing it. It’s in the school’s name, for Pete’s sake: preparation for democracy.
If traditional public schools and districts want to reclaim the mantle of minting engaged and competent citizens, they have some valorizing work to do of their own. A few years ago, I did a small, informal research project examining the mission and vision statements adopted by the nation’s hundred largest school districts to see whether they still view the preparation of students for participation in democratic life as an essential focus of their work. The results were dispiriting. Civics and citizenship were not mentioned at all in the mission statements of well over half of the districts surveyed. Their language suggested that school officials were much more focused on the private and personal outcomes of schooling—preparation for college and career, for example—than on minting public-minded adults who were prepared and motivated to vote, volunteer, donate, or any of the other activities common to active and engaged citizens.
One shouldn’t make too much of mission and vision statements, which are likely divorced from the day-to-day work of teaching and learning in our largest school districts. But it’s instructive to note that, when the men and women charged with overseeing the schooling of more than half of America’s children sit down to ask themselves, “What do we do here? What matters most?” the evidence suggests that the public dimension of public education is not very much on their minds. By contrast, the founder of Democracy Prep, Seth Andrew, was notorious for his habit of asking staff members at random to recite from memory the network’s mission statement: “to educate responsible citizen-scholars for success in the college of their choice and a life of active citizenship.”
Those of us who worry about the civic mission of schools spend a lot of time worrying about how to promote student civic engagement. Yet the more time I spend thinking about this, the more I wonder if we’re not getting this exactly backward. As the Mathematica report demonstrates, schools can be where students go to become civically engaged adults. But schools are invariably where students go to experience the civic engagement of others. No child thinks of it this way, but surely they pick up clear signals about their place in the world, how they are regarded by authority figures who are not their parents, and how much—or how little—is expected of them. If the relationship a child has with a school is coercive, punctuated by frustration and failure, leading to no good end, then there is no reason to expect strong civic outcomes. Civic engagement tends to rise with educational attainment, which is another non-surprise: If our relationship with school is productive, successful, and contributes to a good life outcome, we are more likely to feel invested in civil society. This transcends mere voting. It is enfranchisement in the best and broadest sense.
Supreme Court Justice Potter Stewart famously observed that he could not define obscenity, “but I know it when I see it.” The same, I think, is true of the kind of school culture and climate that signals to children—particularly those historically least likely to have a positive experience of schooling—that their community and country are invested in them. The challenge for researchers is to identify this ineffable aspect of school culture and climate: Name it, quantify it, measure it, and help practitioners get better at replicating it, particularly as we fumble forward in trying to decide what outcomes schools ought to be held accountable for and how.
“Democracy Prep provides a test case of whether charter schools can successfully serve the foundational purpose of public education—preparation for citizenship—even while operating outside the direct control of elected officials,” the Mathematica report concludes. “With respect to the critical civic participation measures of registration and voting, the answer is yes.”
This is an encouraging finding, and bracing. My Democracy Prep friends and colleagues should walk a little taller. But the bigger prize lies ahead. The degree to which young people will choose to become and remain civically engaged—not just voting, but volunteering, fundraising, demonstrating agency and advocacy—is the degree to which they feel fully and truly enfranchised. There can be no more important mission for public education than attaching children to civil society, investing them in its functioning, and encouraging them to play a role in building a more perfect union.
Editor’s note: A version of this essay was originally published in a slightly different form by The 74.
On this week's podcast, Paul Morgan, Professor of Education and Demography at Penn State University, joins Mike Petrilli and Alyssa Schwenk to discuss the evidence on racial disparities in special education identification and services. On the Research Minute, Amber Northern examines the effect of online versus paper tests on student achievement.
Amber’s Research Minute
Ben Backes and James Cowan, “Is the Pen Mightier Than the Keyboard? The Effect of Online Testing on Measured Student Achievement,” Calder (April 2018).
Yes, test scores affect long-term outcomes, even according to this study, despite what its authors mistakenly conclude
In a recent AEI meta-analysis of school choice attainment literature, Michael McShane, Patrick Wolf, and Collin Hitt use thirty-nine impact estimates from studies of more than twenty school choice programs to argue that standardized-test impacts are too unreliable to serve as the “exclusive or primary metric on which to evaluate school choice programs.” In their words:
Programs that produced no measurable positive impacts on achievement have frequently produced positive impacts on attainment. And on the other hand, null effects on high school graduation and college attendance have been reported from programs that produced substantial test score gains. Across these studies, achievement impact estimates appear to be almost entirely uncorrelated with attainment impacts.
Are they right about that last part? As avid Fordham readers know, my colleague Mike Petrilli has already criticized the authors’ methodology and conclusions at length. But for those of you who don’t have time for Mike’s six-part mini-series, here is my abbreviated critique.
First, for a study’s achievement and attainment estimates to “match” under the authors’ methodology, both the sign and the statistical significance of those estimates must be the same. So, for example, if one estimate is positive and the other is negative, they are considered mismatched, even if both estimates are tiny and statistically insignificant. Similarly, estimates are considered mismatched if one is significant and the other is insignificant, even if both estimates are positive (or negative) and similar in magnitude.
If you know any statistics—and the authors do—then you can spot the problem here. As they admit, a fairer approach would be to “simply examine the correlation between program effect sizes on each outcome.” But their actual approach effectively suppresses this correlation. For example, according to McShane, Wolf, and Hitt, English language arts achievement and high school graduation only match in thirteen of thirty-four instances—leaving twenty-one supposed mismatches. Yet according to the authors, only one of those studies found significant effects on achievement and attainment that pointed in opposite directions. (And even in this case, it sure looks like they got their facts wrong.) Meanwhile, seven studies found significant effects that pointed in the same direction. (I guess that’s what happens when two outcomes are almost uncorrelated.)
Second, the authors use their “findings” for program impacts to argue that all test-based accountability is flawed. But of course this is a massive non sequitur. As Mike noted in his second column, there is far more variation in the performance of individual schools than there is in the average performance of school choice programs. So even if achievement and attainment impacts weren’t strongly correlated at the program level (which they might be), there would still be a case for closing schools with extremely low test scores (which is what accountability hawks are actually suggesting).
On a happier note, although the authors’ analysis only includes studies that estimate both achievement and attainment impacts—thus excluding a large number of studies that only estimate the former—the news for school choice is overwhelmingly good. For example, of the thirty-four estimates in the authors’ sample that considered ELA achievement, eleven found a significant positive effect, while just three found a significant negative effect. Similarly, eleven estimates found a significant positive effect for math, while just one found a significant negative effect. And sixteen found a significant positive effect on high school graduation, while just two found a significant negative effect. Finally, nine of nineteen studies found a positive impact on college attendance, and three of eleven found a positive effect on four-year college completion. Yet no study found a significant negative effect on postsecondary attendance or completion. (Think about that for a second.)
In short, the study is full of good news for school choice advocates, and a careful reading actually strengthens the case for taking the achievement impacts of these programs seriously. (For example, ELA impacts seem to be a better predictor of long-term gains than high school graduation rates.) So it’s a shame that this whole conversation has been sidetracked by a shoal of red herrings.
Traditionally, when social scientists have chosen methods that fit their preferred conclusions, they have done so in secret. This report has the debatable virtue of transparency.
SOURCE: Michael Q. McShane, Patrick J. Wolf, and Collin Hitt, “Do Impacts on Test Scores Even Matter? Lessons from Long-Run Outcomes in School Choice Research,” American Enterprise Institute (March 2018).
Discussions about standards tend to focus on either the caliber of standards themselves or how well teachers understand them, but a third aspect of quality standards-based instruction is the support districts and schools give teachers to implement standards. Good standards-based instruction requires supports like aligned curricula and textbooks, professional development, and knowledgeable leadership. A recent RAND study finds deficiencies in two such supports: school leader knowledge of standards and the quality and alignment of classroom materials.
Researchers Julia Kaufman and Tiffany Tsai surveyed 1,349 members of the nationally representative American School Leader Panel (ASLP) in October 2016, and received responses from 422, or 31 percent. The survey asked what materials schools recommended or required in English language arts (ELA) and math, and compared responses to a report from EdReports, a nonprofit that has reviewed popular instructional materials for quality and alignment with the Common Core State Standards (CCSS). RAND researchers used these reviews to calculate a “percent alignment” for all EdReports-rated materials. The ASLP survey also asked questions to assess school leaders’ knowledge about approaches and content in key areas aligned with the CCSS (and most non-CCSS state standards): use of close reading and complex texts for ELA and grade-level content and balance between three areas of rigor (conceptual understanding, fluency, and application) for math.
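The report summary above does not spell out how RAND converts EdReports ratings into a “percent alignment” score. Here is a minimal sketch of one plausible formula, assuming each material receives a rating for every (grade, category) cell and that “partially meets” earns half credit; the rating labels and weights are my assumptions, not RAND’s:

```python
# Hypothetical "percent alignment" score: the credit-weighted share of
# EdReports category-by-grade ratings in which a material meets
# expectations. The rating values and weights are assumptions.
RATING_CREDIT = {"meets": 1.0, "partially_meets": 0.5, "does_not_meet": 0.0}

def percent_alignment(ratings):
    """ratings: one rating string per (grade, category) cell."""
    if not ratings:
        raise ValueError("no ratings to score")
    credit = sum(RATING_CREDIT[r] for r in ratings)
    return 100.0 * credit / len(ratings)

# Example: a material that meets expectations in 3 cells, partially
# meets in 2, and misses in 1 scores (3 + 1) / 6 of full credit.
score = percent_alignment(
    ["meets"] * 3 + ["partially_meets"] * 2 + ["does_not_meet"]
)
print(f"{score:.1f}")  # 66.7
```

Whatever the exact formula, the key design point survives: a single 0–100 score lets materials with heterogeneous per-grade ratings be compared and averaged, which is what makes the CCSS versus non-CCSS state comparison later in the review possible.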
Kaufman and Tsai find that many of the materials school leaders said were required or recommended are not aligned with standards. For example, a key instructional shift in the CCSS-ELA is the requirement that all students read complex texts at their own grade level. However, 64 percent of elementary school leaders and 22 percent of secondary leaders reported a requirement or recommendation to use “leveled readers,” an approach not mentioned in the CCSS-ELA. High percentages also reported use of guided reading series involving leveled reading, including Accelerated Reader/Renaissance Learning (35 percent of elementary principals and 30 percent of middle school principals) and RAZ-Kids/Learning A-Z (42 percent of elementary principals). In math, most of the top-mentioned materials had been criticized by EdReports as partially meeting or not meeting expectations in at least some areas or grades. Of the top ten math materials cited by principals, only Eureka Math was classified as meeting expectations in all categories and grades—and only 17 percent of elementary school leaders and 7 percent of secondary school leaders reported using it.
The study also finds that many school leaders do not have a strong grasp of the standards-related concepts assessed by the RAND survey. Less than half of school leaders said that requiring all students to read complex texts was aligned with their state standards, while the majority thought leveled reading was aligned. Notably, school leaders who reported using materials that met EdReports criteria were significantly more likely than others to recognize aligned concepts. School leaders were not consistently able to match the major standards-aligned math topics with grade levels, particularly beyond the elementary grades.
Results did not all show lack of support, however. School leaders had more success matching math standards with relevant areas of rigor, and 15 to 20 percent of school leaders reported using materials that met EdReports expectations for all categories and grade levels. As researchers expected, whether states had officially adopted the Common Core made a difference to the alignment of materials: In CCSS states, the average percent alignment was 60 for math and 72 for ELA, compared to 54 for math and 64 for ELA in non-CCSS states.
The study has several limitations. Any material not reviewed by EdReports, including eight of the top ten ELA materials, was omitted from the analysis, but some of those materials might be high quality or standards-aligned. The survey explored only a few aspects of school leader knowledge of standards, and, as researchers repeatedly remind readers, questions on standards are adapted from surveys originally intended for teachers, and may reflect the level of detailed knowledge needed by teachers, rather than school leaders. Because the survey does not ask whether schools have curriculum specialists or other content leadership in implementing standards, results may not reflect the level of knowledge and support actually available to teachers.
The report contributes substantially to the conversation about standards implementation. Researchers make the important point that principals cannot encourage teachers to faithfully implement standards (or assess implementation) without some grasp of the instructional shifts required. The issue of misaligned materials is particularly pressing, and the report should encourage schools and districts to consult independent reviews such as those by EdReports. Even strong standards lose value when not supported with knowledgeable leadership and aligned materials—an area where schools and their leaders clearly still need support.
SOURCE: Julia H. Kaufman and Tiffany Tsai, "School Supports for Teachers' Implementation of State Standards," RAND Corporation (2018).