“It’s like some bullsh-t way to get kids to pass.” That’s the cynical description of high school “credit recovery” programs an eleventh grader gave to the New York Post last year. But cynicism appears to be in order. These programs, which purport to help high school students make up work in courses that they’ve failed, are implicated in a new scandal every time you turn around.
From New York to L.A. to North Carolina to Virginia to the nation’s capital, credit recovery is at best raising concerns from parents and educators worried about academic rigor, and at worst enabling administrators and students to collude in handing out diplomas whether the students have learned anything or not. An upbeat story on a local credit recovery program in the Tulsa World notes that a computer program “does most of the teaching” and that, using the program, “One student was able to earn credit for twenty-one courses, essentially an entire school year, in just a matter of weeks.”
Of course, schools have always had some form of credit recovery, such as summer school. The new generation of computer-based credit recovery programs, which aim to get more students passing while keeping costs low, is, however, of a different breed.
Local scandals have proliferated, and rigorous assessments of these new computer-based credit recovery programs are scarce. The few results we have confirm the need for skepticism. As one of the few researchers to conduct rigorous assessments of these programs puts it: “I do have a lot of concerns about the widespread use of online courses for credit recovery.”
Cautious readers will note that the rigorous impact studies have been of individual credit recovery programs, so other programs may be working better. At the same time, the scandals uncovered by journalists probably involve only the worst of these programs. In other words, we’ve heard some bad things about credit recovery, but are they representative? When researchers have conducted national studies of credit recovery programs, as I did in a co-authored Fordham report last year, the data are as thin as they are broad, and we can’t conclude anything about how the programs actually work beyond what types of schools have the programs and how many students they enroll.
A new report by Nat Malkus at the American Enterprise Institute aims to fill in some of this story. Where previous studies got into the weeds of a single program or, alternatively, looked nationwide without saying anything substantive about the programs themselves, Malkus’s team randomly selected and surveyed 168 districts from around the country to find out how their programs worked.
While there are fair-minded arguments for not strangling credit recovery with regulations, Malkus presents us with a Wild West, where districts buy these programs from a variety of for-profit vendors, (human) teachers are barely involved, and most districts exert little oversight. When Malkus looked across eight types of regulations districts might impose, three-fourths of the districts had adopted no more than three regulations. After suggesting that holding back on each policy could individually be justified, Malkus puts the lack of oversight this way: “Taken together, however, these policies offer little assurance that serious attention is given to quality and rigor.”
The fixes to these programs are obvious enough, and Malkus suggests some good ones. Districts (or states or schools) could require students to pass an external exam—either a state-developed test, a school-faculty-developed test, or even an oral presentation administered by teachers in the department. (As it is, just 17 percent of districts have any external validation requirement.) Or they could require much more involvement from teachers, both to ensure that the students are supported and to make sure they aren’t just brainlessly clicking through computer modules.
Yet the reason these fixes aren’t being implemented goes to the heart of the contradiction in these programs. The premise of credit recovery is to take the neediest, worst-performing students and put them in the cheapest, lowest-touch, computer-enabled environment. Sure, districts could require intense teacher involvement or rigorous external assessment. But if they did, the programs would no longer possess their dual raisons d’être: they would no longer be cheap or easy to pass.
What no one has suggested, oddly, is that districts take greater precautions and press pause until they can figure out whether they are actually upholding their academic standards. Are their programs rigorous? Are there external checks, or does the vendor—who is paid by the district, presumably to pass as many students as possible—have the final say? Are they setting the bar high, or is credit “recovered” with little effort, rendering it meaningless?
Researchers, ever concerned about the generalizability of our findings, have been timid about suggesting districts simply stop. Policymakers have watched graduation rates soar and have not wanted to rock the boat. Teachers have mostly been left out of the whole process.
With the accumulation of scandals and other red flags, school boards and states can no longer look the other way. They must find a solution to the obvious moral hazards. And if that is going to take some time to figure out, they ought to simply stop these programs before they do further, irreparable damage to the meaning of the high school diploma.
When asked about teacher satisfaction in the credit recovery program, one official told Malkus’s team, “I don’t ask that question [to teachers], because I don’t care.” We can lament this administrator’s lack of commitment to academics, but, given the incentives, why should he care? We have set up a system where the graduation rate is king, and if outsourcing accountability to predatory for-profit vendors helps you get there, we’ve assured these district leaders that—until the reporters start poking around—we won’t ask too many questions.