Flypaper

Vaccine-making’s lessons for high-dosage tutoring: A respectful disagreement about research

Michael Goldstein and Bowen Paulle
12.11.2020

Editor’s note: This is the fourth post in a five-part series about how to effectively scale up high-dosage tutoring. Read parts one, two, three, and five.

Bob Slavin, a wonderful researcher, has written some hard truths about Covid-19 learning loss. He correctly dismisses policy interventions like extending the school year, typical after-school programs, and summer school. Those won’t work. Bob believes tutoring, however, will work:

By far the most effective approach for students struggling in reading or mathematics is tutoring (see blogs here, here, and here). Outcomes for one-to-one or one-to-small group tutoring average +0.20 to +0.30 in both reading and mathematics, and there are several particular programs that routinely report outcomes of +0.40 or more. Using teaching assistants with college degrees as tutors can make tutoring very cost-effective, especially in small-group programs.

Effect sizes are a wonky way to describe impact. Our friend Matt Kraft, for example, writes that a 0.20 standard deviation effect in education is large.
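For readers who haven’t worked with effect sizes, here is a minimal sketch of how one is typically computed: the difference between treatment and control group means, divided by a pooled standard deviation (Cohen’s d). The scores and variable names below are invented for illustration; they are not data from any study cited here.

```python
# Minimal sketch: computing an effect size (Cohen's d) for a hypothetical
# tutoring study. All numbers are made up for illustration only.
import statistics

tutored = [72, 68, 75, 80, 71, 77, 69, 74]   # hypothetical test scores, tutored group
control = [65, 70, 66, 72, 63, 68, 67, 64]   # hypothetical test scores, control group

mean_t, mean_c = statistics.mean(tutored), statistics.mean(control)
var_t, var_c = statistics.variance(tutored), statistics.variance(control)
n_t, n_c = len(tutored), len(control)

# Pooled standard deviation across both groups
pooled_sd = (((n_t - 1) * var_t + (n_c - 1) * var_c) / (n_t + n_c - 2)) ** 0.5

effect_size = (mean_t - mean_c) / pooled_sd
print(f"Effect size (Cohen's d): {effect_size:.2f}")
```

On this scale, the +0.20 to +0.40 figures quoted above mean the average tutored student scores roughly a fifth to two-fifths of a standard deviation higher than the average control student.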

Philip Oreopoulos and his colleagues agree with Bob. They recently published a meta-analysis, with very large positive effects—0.37 standard deviations—across scores of tutoring programs, some “high dosage” and some not.

And Dietrichson et al. offer the following chart in favor of tutoring, from 2017:

[Chart from Dietrichson et al. (2017): effect sizes favoring tutoring interventions]

Clearly, there is reason for scholars to be optimistic about tutoring. And we are pleased that the programs with which we’ve been affiliated are included in that positive research. We’re glad that Experience Corps, Reading Partners, and the i3 study of Reading Recovery featured high-quality randomized controlled trials, and that the evidence showed impressive gains for students.

So why worry?

We worry because we believe that many more tutoring programs fail than is commonly believed. There are two key drivers of that belief: publication bias and scale-up problems.

Publication bias

This is a broad problem that doesn’t just affect tutoring. Many failed education programs never stick around long enough to get measured. For example, when we scaled high-dosage tutoring from Boston to Houston, it worked. But missing from the story: The Austin Public School system, also in Texas, tried to create its own version of HDT around that time. It quickly failed and disappeared without a trace. That’s not included in “the research.”

When Saga brought HDT from Houston to Chicago, their program succeeded. But at the same time, another Chicago school network launched its own HDT program. That one died after nine months. That’s also not included in “the research.”

When Mike did HDT in our Boston charter school, we deployed those very same tutors into nearby district schools. The charter students saw large gains; the district students saw none. Indeed, every time we’ve been part of a successful tutoring program, one that “enters the tutoring scholarly literature,” we’ve seen a similar program fail and disappear too fast to ever be captured.

Imagine a group of people who try an experimental drug and have bad outcomes, but nobody notices the bad outcomes, only the good ones, so the drug overall seems successful. (You don’t have to imagine too hard: hydroxychloroquine for Covid-19.)

There are presumably hundreds of such tutoring failures. We believe the positive research story omits these.
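To make the mechanism concrete, here is a toy simulation, entirely our own construction with made-up parameters: many programs are tried, their measured effects are noisy, and only the ones that look like clear successes ever get noticed and studied. The average among the noticed programs then overstates the true average.

```python
# Toy simulation of publication bias (illustrative assumptions only).
import random

random.seed(0)

# 500 hypothetical programs whose true effects cluster around a modest +0.10 SD
true_effects = [random.gauss(0.10, 0.20) for _ in range(500)]

# Each program's measured effect is its true effect plus evaluation noise
measured = [e + random.gauss(0, 0.10) for e in true_effects]

# Hypothetical cutoff: only apparent successes survive long enough to be studied
VISIBILITY_THRESHOLD = 0.20
noticed = [m for m in measured if m >= VISIBILITY_THRESHOLD]

print(f"True average effect, all programs:       {sum(true_effects) / len(true_effects):.2f}")
print(f"Average effect among 'noticed' programs: {sum(noticed) / len(noticed):.2f}")
```

With these invented parameters, the noticed programs look far more effective than the typical attempt, which is exactly the gap we worry about.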

Scale-up problems

With vaccines, the 1 millionth dose is identical to the 2 millionth dose. But that can’t happen with education programs delivered by human beings.

In particular, education programs that work small often don’t work when they get big. A famous example: The gains seen by the fifty-eight kids who benefitted from the oft-cited Perry Preschool Project in 1962 didn’t translate so well to 18,000 children in Tennessee or to a similar program in Quebec. For a more recent example: Last month our friend Ben Feit published a study finding that Texas charter schools did not scale up well.

Tutoring—not high-dosage tutoring, but what we might call “regular tutoring”—already failed in a big national scale-up: the George Bush/Ted Kennedy No Child Left Behind version. Again, it’s hard to get the details right.

Every education intervention, including tutoring, requires a newly formulated, carefully calibrated program. Its details ought to depend on which students it serves, whether it’s optional or mandatory, which tutors it uses, what time of day it’s offered, whether it’s online or in person, what the tutor-student ratio is, what curriculum is used, what the leadership is like, the overall school culture, and more. There is no de-situated, de-contextualized “thing at rest,” no vaccine. Many have written about the need to consider whether a program is “hard to scale”; Matt Kraft’s version is described here by Matt DiCarlo.

The evidence on tutoring is, we believe, choppier than we would have wished.

So what to do?

Tune in Monday.


Mike Goldstein is the founder of Match Education in Boston: a college prep charter school for low-income kids; an embedded Graduate School of Education; and a program to share best practices.

Bowen Paulle is on the Faculty of Social and Behavioural Sciences at the University of Amsterdam.
