A new edited volume, “Follow the Science to School,” aims to identify what science tells us about evidence-based practices in elementary schools and to describe what they look like in the real world of classrooms. Following the science into its application in this way—and sharing how it works on the ground—enables us to suggest workable answers to key questions rather than challenging every teacher, school, or district to figure out those answers on their own.
This week, John Catt USA published Follow the Science to School: Evidence-based Practices for Elementary Education, a new book edited by Barbara Davidson, Kathleen Carroll, and Fordham Institute president Michael J. Petrilli. The following editorial is adapted from its introduction.
“Follow the Science”
Those three words became a rallying cry during the Covid pandemic. And at first blush, the message seems straightforward: Identify best practices according to the evidence, and then do them.
But it’s not quite that simple in the real world, whether the subject is how to respond to a deadly virus, how to educate our children, or how to do both at the same time.
The first challenge is defining “the science.” As Jonathan Rauch of the Brookings Institution explains in his recent book, The Constitution of Knowledge, science is no stationary thing. It is constantly changing, as researchers publish fresh studies, new bits of knowledge are added to the old, and old understandings are overtaken by new ones. We certainly saw this with the pandemic, as we learned over time how the novel coronavirus was transmitted (by air, not on surfaces), and thus how best to mitigate its spread (by avoiding poorly ventilated and crowded spaces, not by deep cleaning).
It’s also clear that “the science” is constantly contested, both by scientists and the larger public. That’s actually the essence of science: No one gets the exclusive right to claim what the evidence says, much less a permanent right to do so. As Rauch argues, the scientific endeavor is, by its very nature, a social exercise. It’s a process; a conversation. A community of informed individuals argues and debates until something approaching a consensus emerges, and then they start again. That’s why we need to “follow” the science—it keeps moving, it’s never static.
“Following the science” is seldom easy, especially in the realm of education. What science did we follow when we shifted American schools to remote learning in March 2020 and, for many schools, continued that well into the 2020–21 school year? While keeping kids at home reduced the spread of Covid, this decision also had an enormous negative impact on many students’ academic achievement and mental health. As experiences mount and evidence grows, our understanding grows more complex. Following the science involves tradeoffs, value judgments, and the complications, compromises, and tough choices that are endemic in the real world.
Yet for all its limitations and complexities, “following the science” is one of the primary ways that we humans have made progress over the centuries. It has allowed us to solve problems at a global scale and brought us lifesaving vaccines to counter Covid in less than a year. It is why drinking water in the developed world is almost always clean, why farms can feed billions, why infant mortality has plummeted, why life spans have expanded in ways that our ancestors could have never imagined, and it is why we live with advanced technologies and living standards for ordinary folk that surpass those of kings and queens of yesteryear.
“Following the science” is also one of the primary ways that we can improve our schools so that all American children finally gain the opportunity to fulfill their potential and thrive in the world they will inherit. The urgency of this goal only grew during the pandemic, which cruelly disrupted the home and school lives of children around the world.
Our hope with our new book is to identify what the science tells us about evidence-based practices in elementary schools, and to describe what they look like in the real world of classrooms. Following the science into its application in this way—and sharing how it works on the ground—enables us to suggest workable answers to key questions, rather than challenging every teacher, school, or district to figure out those answers on their own.
We’re talking here about the fundamental questions of elementary education, such as:
- How can young children make sense of the code that is the alphabet? How can we help them move smoothly from sounding out words to reading fluently and confidently?
- How does “reading comprehension” develop? Is it a skill to be learned? Or is it more like a process—driven by how much students know about the world via subjects like history and geography and science?
- How can elementary students be taught to write effectively? Should we worry about spelling, grammar, and punctuation right away, or can that come later? How can we teach children to write strong sentences, paragraphs, and essays?
- What about math? Should we simply teach kids that 9 + 6 = 15—memorize it now!—or is there a phase when it’s better to show them various strategies to figure out and understand why 9 and 6 add up to 15? Are there some ways to teach fractions that work better than others?
- Should students practice reading skills with books at their current reading level or at one that corresponds to their assigned school grade (and above)?
- How should teachers manage their classrooms? What’s the best way to keep an orderly, yet friendly, environment?
Let’s get one thing clear right away: Not everything that makes an elementary school great can be pinned to “evidence.” Skillful teaching and inspired leadership are each an art and a science. And science can’t always give us a single strong answer to every question.
But it often does. The science is out there. The evidence can point the way. And there are good approaches to meeting the challenges that thousands of teachers and students encounter every day. When the adults in charge ask for evidence and put it into practice, we can do better by our students tomorrow than we did yesterday.
What counts as “evidence”?
Yes, it’s a genuine challenge. If we set our evidentiary standards impossibly high and look only to gold-standard experimental studies, we risk limiting ourselves to questions that lend themselves to randomization or to practices, tools, and materials that have been on the market long enough (and have enough financial backing) to allow for robust, expensive evaluations. But if we set our standards too low, we risk encouraging practices that may look “promising” but might be ineffective—or even harmful.
Our approach is to turn to the core tenets of science that have served us well, in all fields, for hundreds of years. In Rauch’s words, those “rules for reality” are:
- The fallibilist rule: No one gets the final say. You may claim that a statement is established as knowledge only if it can be debunked, in principle, and only insofar as it withstands attempts to debunk it.
- The empirical rule: No one has personal authority. You may claim that a statement has been established as knowledge only insofar as the method used to check it gives the same result regardless of the identity of the checker, and regardless of the source of the statement.
These rules mean that people can’t just make stuff up or claim that certain practices are “evidence-based” just because someone said they are. But they also leave room for many different approaches to identify plausible practices and test them against the rigors of the real world. And they certainly mean going beyond randomized experiments.
For example, in the book, we point to rigorous analyses that examine whether particular instructional materials are faithful to evidence-based practices that themselves have been validated by experimental studies. This approach is several steps removed from subjecting the instructional materials themselves to controlled experiments, but we think that’s OK.
Likewise, we laud rigorous attempts to chronicle what high-performing schools and highly effective teachers do in their classrooms, most famously the work of Doug Lemov and his colleagues on the Teach Like a Champion team.
We believe this approach is in line with the one embraced by Congress in the Every Student Succeeds Act of 2015. That law mentions the term “evidence-based practice” over one hundred times, and defines the term via four tiers: strong, moderate, promising, and “under evaluation.”
Curriculum is key
A constant drumbeat in the book is the importance of high-quality instructional materials. We do not believe it makes sense for each one of America’s teachers or principals, or even chief academic officers at the district or charter network level, to try to interpret the research evidence on their own. Such an approach would be isolating, time-consuming, costly, and inefficient.
Instead, we believe that instructional materials are the ideal vehicle for turning evidence into practice. Curriculum developers—with insights from academics and practitioners—should develop evidence-based resources designed for the reality of the classroom, so educators can put them to their best purpose. We don’t have to be computer engineers to use computers effectively in our daily work. Likewise, teachers shouldn’t have to earn doctorates in education research to teach children effectively. If developers do their job well, then educators can focus on mastering the curriculum, rather than learning every intricacy of the underlying research studies.
Simply put, we think it is almost impossible to be an evidence-based elementary school without the adoption and implementation of high-quality instructional materials. Educators need tools that are aligned with the research, and quality instructional materials are the most critical of those tools.
In the wake of two horrific years, even the best elementary schools are struggling to help their students make up for lost time, relearn forgotten skills, and regain their momentum. Readers will find lots of ideas in our book about how we can help young children recover, regain, and thrive. By following the science, educators can lead our schools into a hopeful post-pandemic era.
Those who pay attention to the “Nation’s Report Card” tend to take it for granted. In truth, most people heed it not at all. (I sometimes call it “the most important test you’ve never heard of.”) Because it’s a low-stakes operation that yields no data for individual students or schools and just a handful of big districts, the National Assessment of Educational Progress (NAEP) is easily ignored. And because it’s a federal program that’s been around for half a century, it’s sort of boring, even for NAEP-watchers. It seems to do the same thing over and over, and every couple of years it dutifully reports depressing results for states and nation. There may be a headline or two—more likely an article on page 7—and then it again recedes from view.
Well, brace yourself. Changes may be coming. I’d like to think they’ll result from the wise and insightful recommendations in my forthcoming book, but in fact the National Academies are taking the lead, at least temporarily, and NAEP’s own minders are nudging the future themselves.
This very day, March 24, 2022, the National Academies Press is releasing A Pragmatic Future for NAEP: Containing Costs and Updating Technologies. This eleven-part report, concluding with “A New Path for NAEP,” was commissioned by the Education Department’s Institute for Education Sciences (IES), which asked the Academies for “an expert panel to recommend innovations to improve the cost-effectiveness of NAEP while maintaining or improving its technical quality and the information it provides.” Headed by Karen J. Mitchell and containing a number of heavy hitters in the realm of testing and measurement, this eleven-member group took its charge seriously, and its twenty-one recommendations, if taken equally seriously, would result in big changes for NAEP, including considerable cost savings.
The Nation’s Report Card has grown awfully expensive. The panel pegs its total annual cost to taxpayers at $175 million, which may not sound like a lot in an era of trillion-dollar proposals (and deficits), but which works out to an estimated $438 per test-taker. NAEP, says the panel, is more expensive than PISA, far more expensive (per student) than state testing programs, and several times pricier (per test-taker) than high-stakes exams such as the SAT and GRE, though those tests are typically far more extensive.
The panel was frustrated by the difficulty of obtaining accurate data on the costs of NAEP’s many moving parts, and its first recommendation is that the two entities responsible for the Nation’s Report Card—the National Center for Education Statistics (NCES) and the National Assessment Governing Board (NAGB)—should “develop clear, consistent, and complete descriptions of current spending on the major components of NAEP,” and that these be used going forward “to inform major decisions about the program to ensure that their long-term budgetary impact is supportable.” Indeed, one panel recommendation, stemming from the seeming opacity of NAEP’s budget, is that a full-fledged audit be undertaken.
But containing costs is just the beginning. The panel would also change how achievement trends are monitored and reported; would integrate (and slightly lengthen) assessments so that a student might, for instance, take a combined test of history, civics, and geography (or reading and writing); would modernize how test items are structured and created and how tests are scored (including much heavier use of technology); would alter test framework development and test administration in major ways; and would develop a “next-generation technology platform” for the entire venture.
The report bristles with “should dos” for NCES and NAGB, and it’s unknowable whether they’ll have the appetite and horsepower to undertake all these assignments. A thoughtful article last week by NCES commissioner Peggy Carr and NAGB staff director Lesley Muldoon sketched some changes they’re already making in the assessment, and NAGB has also signaled its intent to convert NAEP to remote administration. But nothing now underway comes close to the overhaul framed by the Academies’ panel.
Yet wide-ranging and far-reaching as that group’s work was, it avoided some key issues facing NAEP. (See my book!) Trying to sidestep political hot potatoes, it did not, for example, address the fact that much of NAEP’s high cost is due to Congress’s post-NCLB mandate to test reading and math at the national and state levels every two years, never mind that this interval is too short to reveal major changes in achievement. Nor did it go near what I view as NAEP’s single greatest current failing, namely its lack of state-level achievement data at the end of high school. (The Congressional mandate extends only to grades four and eight.) Others, including my Fordham colleague Mike Petrilli, think NAEP should commence in kindergarten.
There’s more. After reading the report, another veteran NAEP-watcher expressed disappointment “that the panelists didn’t address the organizational and process constraints on carrying out many of their recommendations. For example, how will the contracting process need to be modified if NCES is going to have more qualified providers? On the NAGB side, what should be done to rein in processes that are now so heavily laden with advisory panels, partnerships, consultations, and discussions that changing anything of substance is at minimum an eight-year task?”
The risk, as always, with reports like this is that they sit on a shelf and nobody does anything. In NAEP’s case, this risk is compounded by the complicated—often collegial but sometimes rivalrous—relationship between NCES and NAGB, as well as the fact that the NAEP budget is set far away from both, the fact that contracting is handled by a different unit of the Education Department, and the fact that Congress both micromanages NAEP and neglects it. (The main NAEP statute hasn’t been touched for decades.) The schisms and divisiveness that plague Capitol Hill have also begun to seep into NAGB itself, as was visible in last year’s fracas over the new NAEP reading framework, and may recur as the Board tackles the next science framework. Note, though, that culture wars over frameworks—and risks to the NAEP trendline—would ease if NAGB follows the panel’s recommendation to make its framework updates “smaller and more frequent.”
NAEP has become the country’s most important and respected gauge of student achievement, of changes over time in that achievement, and of gaps in that achievement. It’s an essential tool for pursuing both excellence and equity in American K–12 education. Its achievement levels are the closest the U.S. has ever come to national education standards. It could fairly be termed indispensable.
Yet it’s also costly, creaky, sluggish, and in many respects, archaic. The National Academies’ panel has gone a fair distance in pointing toward a nimbler, more efficient, and more productive assessment. I wish it had gone farther. But now the big question is how successfully and willingly NAEP’s minders will set about to change the stale bathwater without harming their cherished baby.
There is much to love in George Packer’s essay on the culture wars and education in The Atlantic. He castigates both sides of the partisan aisle for their follies: the left’s support for school closures “far longer than either the science or welfare of children justified” and the right’s employment of overly vague terminology in its attempts to constrain school curricula.
He also criticizes the hype over the latest pedagogical fads—I’m looking at you, “ungrading”—which often amount to educational experiments with children as the guinea pigs. And his paean to old books at the end of the essay makes this English teacher’s heart go pitter-pat.
That being said, both the means and ends that he proposes are flawed. Let me explain.
First, the ends. He envisions an education system wiped of any “partisan scrum,” one where democratic citizens “know how to make decisions together.” Appealing as it seems, this vision of a kumbaya system is unrealistic. He appears to conceive of democracy as a conflict-free zone in which we come together, talk, listen, compromise, and make decisions.
Traditionally understood, though, democracies are raucous, argumentative affairs. In Federalist 10, Madison acknowledged that the causes of factions and conflict “are sown into the nature of man.” Bring together farmers, artisans, businessmen, teachers, and countless other professions with varying religions, family structures, and values, and disagreement will arise. We get Thomas Jefferson calling John Adams a “hideous hermaphrodite” and Benjamin Franklin calling him “a ruffian deserving the curses of mankind.” Why would we expect our institutions of public education to function differently?
For example, amid partisan squabbles over book lists, Packer suggests that our students ought to encounter both To Kill a Mockingbird and Beloved. Of course I think they should read both great works, yet reality often stomps on abstract ideals. And curricula can only include so much. It’s like space on your dinner plate. The inclusion of one book (or vegetable) necessitates the exclusion of another. Discussing one historical event means slighting another. No one can ever get everything they want, and so disagreement, even anger, is inevitable.
To the founders, democracy looked less like graceful adjudication and more like boisterous town halls. While we must condemn any excess—violence and threats, for example—when I see parents speaking before school boards, I see democracy at work. When I see competing editorials across publications, I again see democracy at work.
There are ways to tone down the heat. The local nature of U.S. public schooling helps. It’s easier to get a small community relatively pleased with a curriculum or instructional methodology than a nation of more than 300 million people.
Expanded school choice laws could also dampen the vitriol. When all members of a school share at least the outline of a vision—be it classical, progressive, critical, religious, or otherwise—there’s commonality of first principles. That facilitates compromise. Even so, if the average married couple occasionally disagrees over where to eat dinner, we should expect and even welcome civil—or even heated—disagreements over education.
And now, the means. To avoid the battles, Packer gestures at a sort of third way. Instead of bickering over what events to learn about or how to frame our history, he recommends that we instead teach our students how to think like historians, analyze documents, and apply the skills of criticism.
Intentionally or not, he thereby invokes the progressive education of John Dewey who argued that no content is in itself worth learning. To Dewey, content was only the means by which students master academic and intellectual skills. Politically, this is appealing, as it allows any polemicist to maintain a commitment to rigorous academic standards without getting into the muckiness of debating what students ought to learn. In a similar vein, Packer suggests that educators teach “the ability to read closely, think critically, evaluate sources, corroborate accounts, and back up their claims with evidence from original documents.” He emphasizes skills over content.
In reality, however, as we know from Willingham, Hirsch, and many more, content is essential. Every attempt to skirt around the tough content choices by constructing curricula with a skills-approach leaves our education vacuous. Consider the concept of “thinking like a historian.” What does that mean? When real historians encounter a new primary source document, they don’t necessarily bring to that text some occult set of expert skills. Rather, they bring to bear a wealth of prior historical knowledge: How does it compare to other texts of the era? Does it reveal any new information not already in the literature? What other events were going on at the time that might have influenced its writing?
Reading depends on knowledge. A student’s score on a reading test depends far more on their knowledge of the topic in question than their predetermined reading level. Give even a struggling reader a passage on something they know lots about—say, baseball—and they’ll outperform even the so-called “strongest” readers if those readers know little about the sport. If a student wants to read something like the 1619 Project with any meaningful comprehension or analysis, they need factual, historical knowledge. What are the Constitution and the Declaration of Independence? What do they say? What were chattel slavery and the Atlantic passage?
The literary texts, facts, historical events, and scientific theories that we place in our curricula are essential for learning. E.D. Hirsch has spent decades meticulously detailing the countless schools, districts, and even entire countries that built successful education systems upon a robust, knowledge-based core curriculum. In reality, in trying to avoid the culture war with his skills-focused approach, Packer ends up recommending academic mediocrity in its stead.
Instead of casting off this debate as just “culture warring” or politicking, perhaps we can see how it connects to a centuries-old debate. Plato, Aristotle, Rousseau, Locke, and countless others spent much ink in discussion of what and how our kids ought to learn. Debates over curriculum are not only inevitable; they are necessary.
Not all college majors are created alike, but it turns out that employers want their new hires to exhibit many of the same skills regardless of what they major in. A recent study examines online job ads as a proxy for what employers view as the skills inherent in various college majors. Specifically, researchers look at how requested skills in ads are similar across majors and how differences in skills profiles might explain differences in wages. The idea is that a clearer understanding of how skill sets differ across fields could equip higher education to produce graduates better able to meet the specific demands of local employers.
Analysts measured the skills that employers associate with particular majors using job vacancy data obtained from Burning Glass Technologies, a company that collects almost all job ads in order to create “analytic products” for the labor market. The data include information not only on majors but also on skills, work locations, and occupational details, which enable researchers to link skills and majors at the individual job level and account for within-occupation variation in skill demand, which may be correlated with college majors. The final sample uses online ads from 2010 to 2018 and focuses on those that request a bachelor’s degree and list at least one skill and one major—which leaves them with about 18.5 million unique job ads. They use the Classification of Instructional Programs from the National Center for Education Statistics to code majors into broader categories.
Five majors appear in at least 10 percent of postings, including both business and computer and information sciences, which are listed on 29 percent and 26 percent of unique job postings, respectively. The least frequently demanded majors include theology, philosophy and religion, atmospheric sciences and meteorology, other physical sciences, library science, visual and performing arts, and protective services. Next, the researchers looked at whether the distribution of majors maps to the distribution of degrees granted. They find that two majors—nursing and economics—exhibit employer demand (via their representation in online ads) proportional to the number of degrees awarded. However, demand for engineering and statistics majors in employer ads exceeds the proportion of degrees granted in those areas, while philosophy and religion and English majors show just the opposite (more degrees and less demand, as reflected in online ads).
Next they find that the bucket of what are often termed cognitive skills—like problem solving and critical thinking—appears in more than three quarters of all job ads. In contrast, supervising and directing people and writing skills are least likely to appear in ads.
Finally, using data from the American Community Survey, the researchers looked at average earnings by major across metro areas for employees between the ages of twenty-five and fifty-four with bachelor’s degrees. They find substantial geographic variation both across and within majors in mean hourly wages. After accounting for major and geographic location, there’s still little alignment between the demand for particular skills and earnings by major. This suggests to the researchers that “majors can generally be conceptualized as bundles of aggregate skills that are fairly portable across areas in ways that occupations are not.”
The researchers conclude that a finer-grained categorization of the skills that make up successful completion of a major is needed. If so, research could better explain the observed wage variation within major and across place—and colleges could potentially do a better job supplying the knowledge and skills needed for particular majors, with students reaping the rewards in the local labor market.
All of that sounds good in theory, but our education and workforce systems have always been deeply fragmented. It’s overly optimistic to suggest that a finer categorization of skills could initiate seamless alignment between them. It can’t hurt to try, but so much more is involved in how a formal education, or lack thereof, intersects with a career. An individual’s talents, passions, drive, and experiences, for starters, can play critical roles. If you need an inspiring reminder of that, look no further than this based-on-a-true-story movie.
SOURCE: Steven W. Hemelt et al., “College Majors and Skills: Evidence from the Universe of Online Job Ads,” NBER Working Papers (December 2021).
The typical timeline for college-bound high school seniors is to start college a few months after graduation—the first available opportunity. But is that unbroken path into college the right move for everyone? New research suggests that academic breaks after high school have both short- and long-term impacts on postsecondary enrollment and labor market outcomes. What those impacts are, however, and whether they’re positive or negative, depends on students’ academic readiness for college.
Researchers Nicolás de Roux and Evan Riehl from Bogota’s Universidad de los Andes and Cornell University, respectively, take advantage of a natural experiment using data from various regions of Colombia. While the majority of high schools in the country begin their academic years in January, approximately 500 in two regions traditionally operated on a schedule that began in September. Between 2008 and 2010, a majority of those schools transitioned to the January calendar to align with the rest of the country. The transition was gradual, occurring over two school years. As a result, graduation day for more than 26,000 high schoolers in 2009 was delayed by two months—too late for the September college admission window that they traditionally used and too early for the January window used by graduates in the rest of the country. Thus, even students who were qualified and wanted to attend college were forced to delay enrollment for three months longer than usual.
De Roux and Riehl compare these delayed enrollees’ postsecondary enrollment patterns and labor market outcomes with 2009 graduates in other parts of Colombia who were able to move immediately on to college and with regional peers who remained on the old schedule.
The lead finding was that the calendar shift reduced the number of students who started college at the first opportunity following graduation, even though the break was relatively short and had been anticipated. Relative to comparison schools, the immediate college enrollment rate fell by about 5 percentage points among all schools in the regions where the calendar switch occurred.
Only about half the students who were forced to delay entry went on to enroll in college at all, while others delayed for nearly two years. That is not unheard of in Colombia, the researchers note, but the share of delayers—and of those who never enrolled—in these regions was far higher than anywhere else historically. The enforced break seems to be the culprit, although why an additional three months should have had so dramatic an impact remains unclear.
College persistence, interestingly, did not seem to change from historic rates despite this huge drop-off in attendance, suggesting that students who did not enroll due to the break were highly likely to have dropped out before completing had they attended.
On the labor market side and at seven years post-graduation, de Roux and Riehl observed very little difference in the mean monthly earnings of graduates from schools who switched calendars compared to peers in regions whose schools were always on that calendar. In other words, despite the larger number of no-shows in college, future earnings for the cohort affected by the schedule change were largely the same as for cohorts whose academic journeys were not interrupted. This adds further credence to the hypothesis that many of the delayers, or no-shows, would have dropped out anyway.
However, graduates in regions where most schools switched to the new calendar but their specific schools stayed on the old calendar experienced a 5 percent reduction in mean monthly earnings, as compared to peers in those other regions. This indicated to the researchers that these students were harmed by the break and the reduced likelihood of attending college. The schools that stayed on the old schedule were mostly private, and their students came from higher socioeconomic backgrounds and had higher exit exam scores than students at the switching schools, leading de Roux and Riehl to investigate academic preparation as a possible mechanism driving the outcomes observed.
Adjusting for exit exam scores, students in the affected regions who were better prepared for college experienced more negative returns from forgoing college than did those who were less prepared. That is, the students who had been planning to go to college straight from high school, and who were most academically ready, experienced the greatest harm from the calendar switch. About half of them went to college eventually, but with a delay; the other half moved on with their lives without the traditional postsecondary sojourn, and their long-term earnings suffered as a result. Those who were less academically ready, but had been planning on immediate college enrollment out of tradition or habit, were likely helped by the delay. About half of those students went directly into the workforce instead, ending up in the same place as their no-college peers but without the expense of tuition and the unproductive time spent on a college path that would have produced no degree.
All of this combines to reinforce the idea that giving college the “old college try” is not the best choice for everyone. Completion of a postsecondary degree is what matters most, and preparation is key to completion. Those who are ready seem to benefit from an uninterrupted academic journey; those who are not are likely to benefit from a breather, a rethink, or some expert advice with lots of options (especially work options) in mind.
SOURCE: Nicolás de Roux and Evan Riehl, “Disrupted academic careers: The returns to time off after high school,” Journal of Development Economics (February 2022).
On this week’s Education Gadfly Show podcast, Mike Petrilli, David Griffith, and Victoria McDougald discuss Follow the Science to School: Evidence-based Practices for Elementary Education, a new book that Mike edited with Kathleen Carroll and Barbara Davidson. They talk about the promise of evidence-based practices, the importance of elementary education, and the centrality of high-quality instructional materials. Then, on the Research Minute, Amber Northern examines a study on how employment during high school impacts student outcomes.
- Mike’s book, co-edited with Kathleen Carroll and Barbara Davidson: Follow the Science to School: Evidence-based Practices for Elementary Education.
- Mike’s pieces from previous years addressing elementary education and the importance of research-based practices: “An ode to elementary schools” and “Can evidence improve America's schools?”
- The study that Amber reviewed on the Research Minute: Rune Vammen Lesner et al., “The Effect of School-Year Employment on Cognitive Skills, Risky Behavior, and Educational Achievement,” Economics of Education Review (March 2022).
Have ideas or feedback on our podcast? Send them to our podcast producer Pedro Enamorado at [email protected].
- “AP takes a stand amid the raging curriculum debates. It’s worth emulating.” —Rick Hess
- Asian parents want what charter schools offer: an emphasis on values and good results for students. —Wai Wah Chin
- As more students return to in-person schooling, test scores are rising. —Wall Street Journal
- An Illinois bill that would prevent testing students in pre-K through second grade cleared committee and is headed to the state house and senate. —Chalkbeat
- Russia is now giving “patriotic” lessons in schools to spread anti-Ukraine propaganda. —Washington Post
- “[Denver Public Schools] is attacking the innovations that make schools excel.” —Denver Post
- California has revised its math framework, leaving some course-offering decisions to local districts to mitigate the backlash that its last state-level decision caused. —EdSource
- Polling data suggests that some of the animus in the education culture wars is driven by people who don’t have children enrolled in public schools. —Jessica Grose