Why do some students succeed and others lag behind? This is, of course, a central question in education policy. Since James Coleman’s massive 1966 report, “Equality of Educational Opportunity,” the debate has tended to center on whether the gaps we observe among student groups are more attributable to differences in school resources or family background factors. Yet in his seminal 1959 article “Academic Achievement and the Structure of Competition,” Coleman pointed his readers towards another important factor, more elusive and not obviously connected to either the resources of the school or family: the agency of the student herself.
Adolescents have their own interests and social norms, which influence how hard they work at their studies and what learning they achieve. In an important twist, Coleman argued that school policies—such as grading on a curve—could have an effect on student norms and thus influence student effort, albeit indirectly.
Using an experiment that measures how willing middle school students are to complete homework modules in return for cash incentives, a new NBER working paper aims to shed light on the extent to which student motivation drives student outcomes.
To investigate this question, the multinational group of university-based economists who conducted the research set up a website to administer math quizzes to fifth- and sixth-grade students in three Illinois districts spanning the socioeconomic spectrum. The website offered eighty short math quizzes; students received a cash incentive for each quiz they completed and could retake each quiz as many times as they wanted. Students had access to the website for ten days, and, crucially, the size of the incentives was randomized across students, meaning that some students were randomly offered more money than others. This randomization enables the researchers to estimate the effect of the incentive for different subgroups of students, such as boys versus girls.
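The power of this design comes from the random assignment: because the incentive amount is uncorrelated with everything else about a student, a simple difference in group means estimates its causal effect, overall or within any subgroup. A minimal simulation sketch, with every number invented for illustration (none come from the paper):

```python
import random
import statistics

random.seed(0)

# Hypothetical data: each student is randomly assigned a per-quiz incentive,
# and quizzes completed rise with the incentive plus noise. All parameters
# here are made up; they are not the paper's estimates.
students = []
for _ in range(1000):
    incentive = random.choice([0.75, 1.00, 1.25])   # randomized payment per quiz
    girl = random.random() < 0.5
    completed = max(0, round(random.gauss(16 * incentive, 5)))
    students.append({"incentive": incentive, "girl": girl, "completed": completed})

def mean_completed(group):
    return statistics.mean(s["completed"] for s in group)

# Causal effect estimate: difference in means between incentive groups.
high = [s for s in students if s["incentive"] == 1.25]
low = [s for s in students if s["incentive"] == 0.75]
effect = mean_completed(high) - mean_completed(low)
print(f"Estimated effect of $1.25 vs. $0.75 incentive: {effect:.1f} quizzes")

# Because assignment is random within subgroups too, the same comparison
# can be run separately for, say, girls versus boys.
girls_effect = (mean_completed([s for s in high if s["girl"]])
                - mean_completed([s for s in low if s["girl"]]))
print(f"Estimated effect among girls: {girls_effect:.1f} quizzes")
```

The same two-line comparison of subgroup means is what lets the researchers speak separately about boys, girls, and other student groups without any extra experimental machinery.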
The researchers designed the experiment with the intention of disentangling the extent to which academic outcomes are attributable to student motivation versus “academic efficiency.” This distinction is useful. For example, we may hear that students who do not turn in their homework are lacking motivation, and of course, that may be true. Yet the authors point out that two equally motivated students can behave differently: one, with more of this “academic efficiency,” turns in the homework because it takes her a reasonable amount of time to complete, while the other spends the same amount of time without getting anywhere.
The experiment allows them to differentiate between these two constructs to some degree: the researchers estimate academic efficiency from how long it takes students to successfully complete the quizzes, while estimating a student’s “time preference” (or motivation) from how she responds to the randomly assigned incentives.
As we would expect, the researchers find that students offered larger cash incentives tended to successfully complete more math quizzes. Among students who completed at least one quiz, those in the highest-incentive group ($1.25 per quiz) completed an average of twenty-six of the eighty available quizzes, whereas those in the lowest-incentive group ($0.75 per quiz) completed just eighteen, on average. Students in the high-incentive group also spent an average of a minute more per quiz, presumably because they had greater incentive to be careful and to keep attempting quizzes they failed the first time.
Yet these averages obscure substantial variation across students. Half of the students never completed a single math quiz, while 4 percent completed all eighty quizzes. Of the students who completed at least one quiz, there was dramatic variation in how much money it took to get them to work, when considering the incentives on an hourly basis. At the 25th percentile of time preference, a student needed less than $3 to forego one hour of leisure time, but at the 75th percentile, a student required more than $18.
Then, using estimates derived from the effects of the incentives, the researchers find differences in motivation among student groups. Female students are more motivated (i.e., they have less preference for leisure time) than male students, and Black students are also slightly more motivated than White students. This leads the authors to conclude that “educational interventions that aim to decrease gender or racial performance gaps in math by motivating students through incentives or information about the returns to education may be misguided.” That’s because Black students are already at least as motivated, according to their experiment, and because they find that academic efficiency has a much more powerful effect on the number of quizzes students complete than time preference does.
The researchers also try to isolate the effect of the district itself by examining how academic efficiency varies across the three districts, which spanned the socioeconomic spectrum, after controlling for observable student characteristics. The difference in district value-added between the wealthier and the poorer districts is about three-fourths of a standard deviation, which is substantial.
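One way to picture this kind of value-added exercise: strip out the part of the outcome explained by observable student characteristics, then compare district averages of what remains, expressed in standard deviations. The toy simulation below uses a single made-up covariate and invented parameters (nothing here is taken from the paper):

```python
import random
import statistics

random.seed(1)

# Toy value-added illustration with invented numbers: students differ on one
# observable (prior achievement), and the "wealthy" district adds 0.75 to the
# outcome by construction.
rows = []
for _ in range(2000):
    district = random.choice(["wealthy", "poor"])
    prior = random.gauss(0, 1)                      # observable covariate
    outcome = (0.5 * prior
               + (0.75 if district == "wealthy" else 0.0)
               + random.gauss(0, 1))
    rows.append((district, prior, outcome))

# OLS slope of outcome on prior, then remove the part explained by prior.
priors = [p for _, p, _ in rows]
outcomes = [y for _, _, y in rows]
mp, my = statistics.mean(priors), statistics.mean(outcomes)
slope = (sum((p - mp) * (y - my) for _, p, y in rows)
         / sum((p - mp) ** 2 for p in priors))
residuals = [(d, y - slope * (p - mp)) for d, p, y in rows]

# District "value-added" gap: difference in adjusted means, in outcome SDs.
gap = (statistics.mean(r for d, r in residuals if d == "wealthy")
       - statistics.mean(r for d, r in residuals if d == "poor"))
gap_sd = gap / statistics.pstdev(outcomes)
print(f"Adjusted district gap: {gap_sd:.2f} SD")
```

The adjustment matters: without it, any correlation between districts and student observables would be misread as a district effect.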
The key implication of the study is that financial incentives are likely to be costly, and because achievement gaps don’t reflect gaps in motivation anyway, altering academic efficiency, not student motivation, is the “low-hanging fruit.”
The experiment is clever and the findings are fascinating, but I see two important objections to the researchers’ claims about the varying importance of student motivation and academic efficiency.
First, they define academic efficiency as covering “differences in a child’s initial proficiency level, or differences in a child’s study process, academic support network, or innate ability.” We don’t know which of these factors contributes the most to academic efficiency, but they imply that this is mostly about levers that we understand and that are easy to implement. For example, in their conclusion to the paper, they suggest that “improvement in the quality of instruction” would improve efficiency, which is no doubt true. Yet increasing funding, improving the quality of instruction, or otherwise improving school inputs—even if they are well worth the associated costs—are unlikely to come close to equalizing academic efficiency, considering that so many other factors contribute to it. Some of the contributors to academic efficiency—student IQ, parent education, or previous levels of academic achievement—probably can’t be changed no matter what the intervention.
The second problem is that the claim that average motivation is just as high for Black students as for White students contradicts all the other research I have seen on this subject, whether related to education or to other life circumstances. Researchers have tended to find lower motivation, greater preference for leisure time, or higher “discount rates” of future benefits for Black respondents, poor respondents, and male respondents (e.g., here, here, and here). To be clear, I expect that most or all of any racial motivation gap reflects average differences in socioeconomic status across racial groups, not something inherent to students of one race or another. If researchers control for socioeconomic differences, as some have, both achievement and motivation gaps are bound to shrink or even disappear.
So if academic efficiency is actually very difficult to change and differences in engagement and motivation do explain some academic achievement gaps, it’s no longer clear that the key policy implication of this project—that improving academic efficiency is the low-hanging fruit—still holds. The resounding success of some recently evaluated student incentive programs, particularly among traditionally disadvantaged student groups, is another reason to doubt this implication.
The results of this experiment have given us a lot to chew on. But given the broader research on motivation and student incentives, readers should be skeptical of the broad-brush claim that student incentives are generally ineffective.
SOURCE: Cotton et al., “Productivity Versus Motivation in Adolescent Human Capital Production: Evidence from a Structurally-motivated Field Experiment,” 2020 (NBER Working Paper).