Folks who have “tutoring” as the hoped-for winning square on their post-Covid bingo card will want to pay close attention to a recent report detailing a field experiment in virtual tutoring. A group of researchers led by Sally Sadoff of the University of California San Diego created the pilot program and tested its efficacy via a controlled experiment. While the price tag was low, the academic boost observed was negligible, and any effort to scale it up will require much that’s still in short supply more than two years into the pandemic.
In the experiment, sixth- to eighth-grade students from one middle school in Chicago Heights, Illinois, were randomly selected to receive virtual tutoring from college students via the CovEducation (CovEd) program. CovEd was created in the early days of the pandemic in 2020 to provide K–12 students with academic and social-emotional support. Tutors were college-student volunteers recruited specifically from “top-tier research universities” (hence the low cost of the effort). A total of 230 tutors from forty-seven institutions participated in the pilot. About three-fourths of them were women; 40 percent were White, 34 percent were Asian, 20 percent were Hispanic, and 5 percent were Black. About 70 percent were science or engineering majors. Prior to the start of the program (March 2021), CovEd provided tutors with three hours of training on pedagogical techniques, relationship building, and educational resources. During the program, CovEd offered tutors weekly peer mentoring sessions to troubleshoot challenges, share best practices, and build community.
Nearly 100 percent of Chicago Heights Middle School (CHMS) students were from low-income households; almost two-thirds were Hispanic, and just under one-third were Black. Prior to the pandemic, just one quarter of the school’s students were meeting grade-level standards in math and reading. In February 2021, CHMS students were offered the choice to remain fully remote or to participate in a hybrid model in which they attended school in person part of the time and learned remotely the rest. Only students who chose the hybrid option (approximately 58 percent of the total school population) were eligible to participate in the pilot. A total of 560 students were involved in the experiment, randomly assigned to either a treatment group (264 students) or a control group (296). Control group students participated in regular advisory period activities, whereas treatment group students received tutoring. No demographic breakdown of students in each group was provided in the report. Treatment students were offered two thirty-minute sessions of one-on-one virtual tutoring per week—one session while in person at school and another while learning remotely at home. Thus, over the twelve-week program, students had a maximum of twelve hours during which to attend tutoring. However, students were added to the treatment group in three waves, and those who joined in later weeks had fewer participation opportunities available.
Glitches occurred right off the bat. Eighteen percent of those assigned to the treatment group did not attend a single minute of tutoring, and treatment group students attended an average of just 3.1 hours. Low take-up was ascribed to weak overall attendance during this “transitional period” from fully remote to hybrid learning, a common story in schools across the country. Student achievement in math was measured via the Illinois Assessment of Readiness test—the state’s standardized test—and in reading via iReady, a formative assessment used to track student progress. Bottom line: The pilot program produced consistently positive but very small and statistically insignificant effects on student achievement on both tests relative to the control group.
The researchers ran intent-to-treat models and report that, had more students spent more time in tutoring, they would likely have done even better on the tests. Even though this seems somewhat obvious—the same could likely be said of attending more actual school—they use it as the basis for their contention that the tutoring program could be scaled to produce stronger academic outcomes. However, it must be noted that the pilot program had an extensive social-emotional component that goes entirely undocumented in the report. With no data on how much session time was actually spent on academic support versus building relationships or other social-emotional priorities, the return on investment is muddy at best.
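The arithmetic behind that contention is the standard move of scaling an intent-to-treat (ITT) estimate by realized take-up. The sketch below uses simulated data, not the study’s; every number beyond the sample sizes and take-up figures quoted above is purely hypothetical. It illustrates how an ITT effect diluted by low attendance translates into a larger per-hour effect among those who actually showed up:

```python
# Minimal sketch (simulated data, hypothetical effect sizes) of scaling an
# intent-to-treat estimate by take-up to recover a per-hour tutoring effect.
import numpy as np

rng = np.random.default_rng(0)
n_treat, n_control = 264, 296          # group sizes from the pilot
n = n_treat + n_control
assigned = rng.permutation(np.array([1] * n_treat + [0] * n_control))

# Hypothetical attendance: 18% of the treated attend zero hours; the rest
# average roughly 3.8 hours, so the treatment group overall averages ~3.1.
hours = np.where(
    (assigned == 1) & (rng.random(n) > 0.18),
    rng.gamma(shape=2.0, scale=1.9, size=n),
    0.0,
)

# Hypothetical test scores in standard-deviation units, with an assumed
# true effect of 0.02 SD per hour of tutoring received.
scores = 0.02 * hours + rng.normal(0.0, 1.0, size=n)

# ITT: the effect of merely being *offered* tutoring, diluted by low take-up.
itt = scores[assigned == 1].mean() - scores[assigned == 0].mean()

# Wald/IV rescaling: divide the ITT by the extra hours the offer induced
# to get the implied effect per hour actually attended.
first_stage = hours[assigned == 1].mean() - hours[assigned == 0].mean()
per_hour = itt / first_stage

print(f"ITT effect:      {itt:.3f} SD")
print(f"Per-hour effect: {per_hour:.3f} SD per hour")
```

With only about three hours of average attendance, even a nontrivial per-hour effect produces a tiny ITT estimate in a sample of 560 students, which is the crux of both the authors’ scaling argument and the skepticism above.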
Those who led the pilot remain optimistic about scaling it up, but that won’t be easy. The supply of free college student labor is finite, and scarcer still when restricted to top-tier research universities. Ardor for the remote format used here will wane as the “old normal” reasserts itself. And student absenteeism—an ongoing problem with regular school—will continue to dog any voluntary program such as this one. It seems likely that the kids with the greatest need of tutoring are the most apt to skip, or be unable to participate in, those sessions. Taken together, these factors could keep the tiny academic effects seen in the pilot at exactly the same level—at a higher cost—in a larger effort.
SOURCE: Matthew Kraft, John List, Jeffrey Livingston, and Sally Sadoff, “Online Tutoring by College Volunteers: Experimental Evidence from a Pilot Program,” The Field Experiments Website (February 2022).