By Adam Tyner and Brandon L. Wright
Upwards of 3.6 million high school seniors graduated this year, and most of them left twelfth grade with a reasonable complement of the knowledge and skills we expect of those taking their first steps into adulthood—be that college, career or technical training, military service, even a fruitful “gap year.” Most—but not all. Recent scandals in Washington, D.C., Maryland, and elsewhere have made clear that far too many young people are being shuttled through secondary school with little regard for whether they leave with the requisite skills.
Pressure to boost graduation rates plays a role—but so do complex motivations like empathy and fear for kids’ future well-being. It’s these latter impulses that lead folks to believe that easing expectations, at least for especially disadvantaged students, is a victimless act, maybe even a noble one. “These struggling students will be even worse off without diplomas!” such a person might declare. “So what if they missed some days of school? They made it this far, and we can’t mess with their futures. Let’s get them across the finish line.”
But step back a moment and you will see the harm. Awarding diplomas to students who miss weeks of school diminishes the credential’s meaning. And the excuses we have made for these near-passing students are built on slippery slopes: As we saw in D.C.’s Ballou High School and elsewhere, overlooking weeks of missed school can easily turn into overlooking months. The Washington Post reported last week that the response of D.C. Public Schools to the scandal is to soften the official attendance policy so that more students can graduate, an example of “defining deviancy down.” In the end, the meaning of the diploma is lost.
This pervasive bar-lowering devastates opportunities for upward mobility. Poor and minority students already have a lot going against them. Many of the neighborhoods that feed Ballou, for example, have social and economic problems that make it hard to get on a college-ready track. And of course many students experience racism, too. But by letting almost anyone with a pulse graduate, schools like Ballou take away a key ladder out of these difficult circumstances.
For students who study diligently, turn in their work dutifully, and attend class regularly, letting others who missed half the year graduate might feel like a slap in the face. It’s like someone who scrimps and saves for years to build up a nest egg, only to have the government devalue the currency right before they retire. Employers might even question the qualifications of 4.0 students at schools like this, mindful that if one can graduate from them without even showing up, who knows how little it may take to get an A. In the end, the hard-working students at struggling schools might feel like chumps, and future classes may decide that serious academic effort isn’t worth the trouble.
The problem, of course, is much larger than just a few schools. Ironically, in the name of “equity,” the federal government has pressured states to raise graduation rates for at least a decade; states have in turn pressured school districts to do the same. District leaders push on principals, and principals on teachers. It’s not really a surprise that graduation rates are at a record high of 84 percent, up nearly 10 percentage points in ten years.
Raising graduation rates might sound like a good thing, but we aren’t holding those who oversee this diploma boom accountable for ensuring that those capped-and-gowned students actually deserved to graduate because they completed their courses and learned what they should. Absent such checks, pressuring schools to raise graduation rates is no different from forcing them to give students straight As: Such a policy wouldn’t magically create a class of brilliant students; it would just make it much harder for others to tell who has achieved and who hasn’t. So long as students must meet external standards to graduate, encouraging schools to raise graduation rates can make sense: It puts students and their teachers on the same team, in shared combat with the external examiners. Yet that’s not happening. Only a minority of states requires high school exit exams or any other external validation of student learning. And in the backlash against the supposed evils of testing, some jurisdictions have recently dropped even those requirements.
The problem extends to course grades, too. Although some recent studies have shown that grade inflation has occurred more in affluent schools than in high-poverty schools, affluent students are harmed less by it because they often have many paths to success besides good grades. Rounding a B up to an A or a C to a B feels nice in the moment—“this student’s already done with my class, and they’ll be more likely to get into college with a better grade,” one might rationalize—but if colleges and employers can’t distinguish among students’ academic performance, they are more likely to fall back on personal connections, family ties, school reputation, and outright prejudice to make their decisions. This hits people from disadvantaged backgrounds hardest of all.
That’s why minority youngsters accrue greater premiums from academic signals of high achievement than whites do, as other recent research has shown. To succeed in a country where the deck is often stacked against them, African Americans and other minorities get a special boost from academic signaling. Of course, we must work to eliminate prejudice in America, but educational institutions have a special responsibility to make these signals—whether grades or credentials—count for something.
Some well-meaning people argue that we must “consider the issue more widely,” and that, because many youngsters in schools like Ballou have grown up in concentrated poverty, we should be wary of criticizing the lowering of their academic bar. But besides exemplifying the “soft bigotry of low expectations,” and potentially encouraging students to accrue debt for college degrees they are unprepared to complete, lowering the bar for students who face additional challenges cheapens the success of classmates who overcome those same challenges. Worse, it demeans and cheapens every award of the once respectable high school diploma.
The octet of D.C.-area private school heads who boasted a few days ago that their pricey bastions of teaching and learning will no longer offer Advanced Placement courses made much of how the home-grown classes that will replace AP “allow for authentic engagement with the world and demonstrate respect for students’ intellectual curiosity and interests.”
That’s apt to resonate with the upper-middle class parents whose children fill most of the seats at places like Sidwell Friends, St. Albans, and Landon—and whose per-child tuition payments next year will mostly be north of $40,000. Their kids are apt to do fine in college and beyond, with or without AP. (They’d probably do fine with or without the pricey private education!) For the vast majority of American families, however, desperate for quality schooling and solid college prospects for their own children, this whole maneuver looks, well, snobby and smug.
It’s also slightly off-base and disingenuous. Off-base because, in contrast to the school heads’ assertion that AP courses “emphasize breadth over depth,” the College Board has been systematically overhauling and replacing its thirty-eight AP course frameworks and exams to emphasize concepts and “big ideas” as well as “essential knowledge,” and expects to complete that exacting process just about the same time these elite schools are repudiating Advanced Placement.
It’s disingenuous in part because the heads’ statement barely hints at what’s almost certainly their main motive in repudiating AP: Everybody has it nowadays, and it’s essentially free down the street. So offering AP no longer makes their schools distinctive—and worth the hefty cost. They must find other ways to be different.
And it’s disingenuous because, while the schools cease to list courses that are branded “Advanced Placement” and sanctioned as such by the College Board, they’ll keep administering the AP exams in May—and you can be sure that thousands of their pupils will continue to take them. (You can also be sure that teachers and school heads will monitor those exam results and agonize if they don’t include enough 4’s and 5’s.) That’s what’s happened at other high-status private schools such as Fieldston, Exeter, and Choate Rosemary Hall, which made a big deal of forswearing AP but whose students—and most definitely their tuition-paying parents—continue to take it seriously.
Complacent and a tad self-righteous as their declaration reads, however, the school heads are not entirely wrong to forgo some of AP’s hassles, such as its rigid exam calendar and insistence on approving teachers’ course syllabi. For as Advanced Placement has spread across American secondary education—with almost three million students sitting for more than five million exams last month—its value has diminished somewhat for kids at well-established schools like these.
The selective colleges to which most of their seniors apply are awash in candidates with AP courses and exam scores on their transcripts. While that information helps admissions officers appraising kids from little-known high schools in remote locales, those same officers already have databases on applicants from Georgetown Day School and Holton-Arms. They have a fair sense of what’s in those schools’ courses, what the transcript grades and class ranks signify, and how seriously to take their teacher recommendations. They can easily couple that information with scores on other tests and predict how a given applicant will fare on their campuses. Moreover, as those same selective colleges grow fussier about conferring degree credit and acceleration based on AP exam scores—the Harvard faculty recently voted to cease offering the option of starting as a sophomore—one of the original rationales for Advanced Placement wanes.
On the other hand, solid scores on AP exams are still used, almost everywhere, Harvard included, for college course placement. It remains possible to skip mass-lecture introductory classes by submitting evidence of having mastered that content in high school. And thousands of other colleges continue to rely on those exam scores as the basis for conferring degree credits and the possibility of shortening one’s time within the ivy walls—and reducing one’s total tuition hit.
Perhaps the biggest sacrifice being made by the D.C.-area private schools is the opportunity for their students’ work—and ultimately their teachers’ effectiveness and their own institutional value-add—to be judged impartially on a national metric that’s retained its rigor in a time of grade inflation and that’s scored anonymously by veteran high school teachers and college professors. Advanced Placement is about as close as American K–12 education has today to a gold standard—and as close as we come to a quality national curriculum at the intersection of high school and college. While independent schools are of course free to shun all such forms of standardization, the thousands of public and private schools that have embraced AP are enhancing their students’ access to assured educational quality and academic rigor. One day, the elite institutions that say they can do this better on their own may find themselves sorry that they scorned an approach that’s stood the test of time since 1955.
I wrote last month about states’ lengthy struggle to turn around low-performing schools. Most federally funded strategies have been unsuccessful, and states have hired specialists and retrained school staff instead of instituting fundamental reforms.
But states have the freedom under ESSA to try creative strategies to fix their worst schools, as Nelson Smith and I argue in the latest issue of NASBE’s The Standard. The law grants them significant deference, and $1.1 billion in the current fiscal year, to achieve this goal. States should use that leeway and funding to adopt or adapt three approaches already in use that go beyond cosmetic remedies for troubled schools. And state boards are among the entities best poised to effect this change.
The first approach—a particularly promising one—is charter expansion, wherein schools identified for comprehensive or targeted support are replaced by or converted into charter schools. Second is a state turnaround district, in which the state withdraws control of struggling schools from their home districts and creates a state-managed entity that assumes responsibility for getting those schools to an acceptable level of performance over some period. And the third approach includes state-led but district-based solutions, where a state-appointed individual or entity essentially assumes plenary power over a district (or subdistrict) and decides what solutions fit each individual school.
Rigorous evaluations show that these efforts can improve student outcomes when done well. But they can fail to produce gains when not designed, implemented, or run effectively, or when they don’t match a locale’s unique characteristics. So state leaders must keep in mind that each approach has advantages and drawbacks, and they shouldn’t try to cut and paste a method that worked elsewhere without careful consideration of local context. Moreover, these methods are not mutually exclusive, and smart policymakers may adopt more than one approach to suit schools in different corners of their states.
Board members in many states are already operating with ample leeway to propagate these reforms. And for those who aren’t, every new state budget, every state superintendent’s contract negotiation, and every annual list of state board priorities is an opportunity to review progress and fill gaps. ESSA frontloads responsibility on local officials, so the challenge for state policymakers is to achieve the right sequence of carrot and stick. They must persuade, cajole, and challenge districts into taking effective action, but they also need local leaders to know they will use those “rigorous interventions” ESSA demands if failures persist. And when undertaking any of the aforementioned avenues toward better school-improvement strategies, state board members ought to keep a few basics in mind.
1. Follow the money. In a January 2018 letter to state education chiefs, the U.S. Department of Education said that states with existing School Improvement Grants—the maligned method of turnaround under No Child Left Behind—could either keep current plans and reporting in place or use those funds to support the range of possibilities afforded by ESSA’s more wide-open rules. State boards should monitor how old and new funding streams for school improvement are being used for maximum effect.
A second financial consideration is whether state education agencies are creatively leveraging ESSA’s mandated turnaround funding with two other discretionary funding streams: a voluntary 3 percent Title I set-aside that can be used for “direct student services” and a Title IV block grant for student support and academic enrichment.
2. Address the supply side. All of these models require extraordinary commitment to recruiting, hiring, and cultivating talent. Whether a state opts for chartering, a turnaround zone, or a partnership, schools need to be ably led and staffed with terrific teachers. And because schools in need of turnaround often cluster in particular cities and regions, it is important to build the whole talent pool and not just the district or charter portions of it. State boards should query how these plans are coming along. They should also ask whether officials are forming partnerships with organizations that specialize in human capital development—starting with local colleges and universities, but also including reform-focused nonprofits such as New Schools for New Orleans, the Mind Trust in Indianapolis, and the Tennessee Charter School Center.
3. Use the power of the question. Because state-led turnarounds inevitably disrupt the usual hierarchies of accountability and power, they can get mired in turf battles. State boards can not only hold their own direct reports accountable—demanding honest answers about expectations and results—but also give district leaders and stakeholders a forum for input and advice. Periodically, they can put tough questions to all parties: Is this thing working? Is it producing results for children? Is it being run in a fair and equitable manner?
ESSA does not provide a road map for turning around struggling schools, and that’s a good thing. It instead gives state boards multiple means to improve the lot of students who need better options fast, and do so in ways that fit unique local circumstances.
On this week’s podcast, Andy Rotherham, co-founder and partner at Bellwether Education Partners, joins Mike Petrilli and David Griffith to discuss how schools can prevent mass shootings without turning themselves into bunkers. On the Research Minute, Amber Northern examines how warm weather affects student learning.
Amber’s Research Minute
Joshua Goodman et al., “Heat and Learning,” National Bureau of Economic Research (May 2018).
If a little treatment goes a long way, does it stand to reason that more treatment will go even further? A research team led by Karen Bierman of Penn State University tested this idea, and their results were recently published in the journal JAMA Pediatrics. The treatment in question is the Research-Based and Developmentally Informed preschool classroom program (REDI-C), which researchers extended via a home-visiting program designed to train parents as surrogate teachers (REDI-P).
To carry out the study, researchers recruited the families of 200 four-year-olds participating in Head Start programs across three counties in Pennsylvania. Fifty-five percent of the children were white, 26 percent were black, and 19 percent were Latino. Most primary caregivers were mothers (89 percent), roughly one in three were single parents, and slightly more than half were unemployed. Almost all participating families were living in poverty with a median income of $18,000 per year.
Students were randomly assigned to the treatment group—those whose families would receive the in-home intervention—or the control group. Control families received only a packet of math learning games in the mail, while the other parents were visited ten times during their child’s last preschool year and six times during kindergarten. These home visits included extensive in-person coaching to enhance parent-child relationships and demonstrations of home learning activities to support child development and school readiness, especially language, literacy, and social-emotional skills. Sample activities included storybooks, conversation games, and pretend play like “restaurant,” in which children practiced their letters and language skills as they took their parent’s orders and mixed up alphabet soup.
Researchers first evaluated children’s outcomes at the end of kindergarten. Academic achievement was gauged in two ways: performance on a sight word fluency test and a teacher-completed assessment of student competence in math and reading. Social-emotional adjustment was measured using a test of students’ pragmatic judgment skills administered by research assistants. Those research assistants then provided their own assessments of students’ attention, impulse control, and mastery motivation based on their experience administering the pragmatic judgment test. Home problems were measured using three parental survey instruments (with names like “Parenting Daily Hassles Scale” and “Parent Stress Index”) reflecting emotional symptoms, peer problems, conduct problems, and hyperactivity, among other areas of potential difficulty. Finally, teachers rated students’ need for and use of services at school—such as IEP use, speech-language services, taking medication for attention or behavior, mental health counseling, and behavioral support.
Researchers found that, consistent with their previous studies, students who received the REDI-P treatment at home scored significantly better on all measures at the end of kindergarten than did students receiving only REDI-C and the math-games packet. To further their research and to test the longer-term effects of the home-visiting intervention, Bierman and team followed up with students at the end of their third grade year, administering assessments again at that time. They found strong evidence of a lasting benefit of the treatment in all assessed areas, including indicators of both academic and social-emotional well-being.
Is this proof that home visits can lead to sustained positive effects? Up to a point, perhaps. But a couple of caveats should be kept in mind. First, the academic test used for kindergartners and third graders here is the same. While this makes it easier for researchers to compare data across years, sight word identification is not the standard for third grade reading proficiency, and it seems odd that the state’s third grade test data were not available to the researchers. Third graders who perform poorly on a kindergarten-level test likely performed poorly on grade-level measures as well, but it is a stretch to attribute that to a lack of pre-K programming exposure. Second, the adult-reported assessments of both academics and social-emotional adjustment in class and at home lean heavily toward compliance and rule following. It stands to reason that obedient children are less likely to raise hackles for teachers or parents, but what bearing does obedience have on academic achievement? The extent to which the social-emotional assessments measure self-motivation and executive function versus rule following and lack of defiance is an open question. The evidence here seems to suggest mainly that extending Head Start programming into the home goes a long way toward benefiting adults. That also seems to be the point.
SOURCE: Karen L. Bierman et al., “Effect of Preschool Home Visiting on School Readiness and Need for Services in Elementary School: A Randomized Clinical Trial,” JAMA Pediatrics (June 2018).
As school districts across the nation realize that one-size-fits-all models are outdated, interest is growing in portfolio-style districts made up of high-quality, diverse, and autonomous public schools. One such district is Denver Public Schools (DPS). In a recent report, A+ Colorado, a nonprofit education advocacy organization, took a closer look at how diverse the district’s options are, examining whether its portfolio model provides an equal variety of options for students in all areas of the city. The report also examines option diversity by grade band, and considers whether specific school models correlate with higher or lower performance. Though specific to Denver, the analysis serves as a good model for other states and districts that wish to examine the diversity of their school portfolios, as well as locales looking to increase the diversity of their school offerings.
The data examined for the study was publicly available from DPS. The materials included 2018 enrollment guides, including enrollment projections; school websites; and the district’s new School Finder tool. Researchers used the Shannon-Wiener Diversity Index to give a quantitative value to relative diversity of a given community by calculating a maximum possible level of diversity and comparing it to the actual level. Most commonly used in biological studies, the index as used in the report provides a look at both the number of school models (“richness”) and how evenly available seats are distributed across the various models (“evenness”). School models were categorized by thirteen different tags, with some schools fitting into multiple categories. Researchers used up to three labels to classify each school. For example, a school might be both “college prep,” having an explicit focus on preparing students for four-year colleges, and “single gender.”
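To make the “richness” and “evenness” ideas concrete, here is a minimal Python sketch of the Shannon-Wiener calculation as described above. The model tags and seat counts below are invented for illustration; they are not A+ Colorado’s actual categories or data.

```python
from math import log

def shannon_wiener(seat_counts):
    """Return (H, richness, evenness) for a mapping of
    school-model tag -> number of available seats.

    H is the Shannon-Wiener index: H = -sum(p_i * ln(p_i)),
    where p_i is the share of seats in model i. Evenness compares
    H to the maximum possible diversity, ln(richness), so it
    ranges from 0 (all seats in one model) to 1 (seats spread
    evenly across every model).
    """
    counts = [c for c in seat_counts.values() if c > 0]
    total = sum(counts)
    h = -sum((c / total) * log(c / total) for c in counts)
    richness = len(counts)  # number of models actually offering seats
    evenness = h / log(richness) if richness > 1 else 0.0
    return h, richness, evenness

# Hypothetical seat counts for one region (illustrative only):
region = {"comprehensive": 5000, "college prep": 1200,
          "arts": 300, "single gender": 150}
h, s, e = shannon_wiener(region)
```

In this toy region, four models are present but most seats sit in one of them, so evenness comes out well below 1, which is how the report can distinguish a region that merely lists many models from one whose seats are genuinely spread across them.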
The study found that each region offers a unique set of options for students, but some regions have more diverse offerings than others, and no one region has seats in every school model. The Northwest region is the most diverse, while the least diverse regions are the Southeast and Southwest, though every region has seats in at least ten of the thirteen school types. School types are distributed unequally, meaning some students cannot attend the schools that most align with their needs and interests. For example, the Near Northeast region has no single-gender or early-education schools, while in the Northwest region there are no schools that emphasize the arts, and only 9 percent of schools in the Southeast are labeled “college prep.” While the majority of seats in DPS are considered comprehensive, fewer than one in ten comprehensive seats are in the Far Northeast, while over a quarter of comprehensive seats are in the Near Northeast and Southeast regions. The diversity of the school models also varies across grade bands; many models are not represented at the middle or high school level.
Results also show that no one model is consistently high-quality. Out of the thirteen models analyzed, only seven have over 50 percent quality seats, as defined by the 2016 DPS School Performance Framework, and in six of the models, including comprehensive schools (which comprise the majority of DPS seats), fewer than half of the seats are high quality. Quality also varies by grade band: In early education models, 90 percent of the seats are high quality.
One limitation of the report is that school location does not map perfectly onto accessibility. Students may have access to a school of their choice in a different region, and the researchers note that their measure of “available seats” does not mean seats are accessible to all nearby students, as some schools have admission requirements or waitlists. Additionally, A+ Colorado sorted schools into thirteen categories but acknowledges that these categories do not capture all of the schools’ unique characteristics, which means that “richness” is under-reported across the district.
Although DPS schools are diverse across the district, there is still room for improvement. Further research on this topic should consider the socioeconomic status of students in the schools; doing so would allow researchers to see whether social class affects where these schools are placed. DPS’s goal for 2020 is that at least 80 percent of students should be attending high-performing schools. For this to happen, it must approve new schools in a manner that increases diversity. And this analysis ought to help.
SOURCE: “Unequal Choices: School Model Diversity in Denver Public Schools,” A+ Colorado (May 2018).