By Michael J. Petrilli and Amber M. Northern, Ph.D.
According to the National Alliance for Public Charter Schools, 2016–17 was one of the slowest-growth years for charter schools in recent memory. Whereas the Race to the Top era usually saw an annual net gain of 360–380 charters, by 2016–17 that increase had dropped to roughly 120. Nobody knows for sure why this happened, but one hypothesis is saturation: With charters enjoying market share of over 20 percent in some three dozen cities, perhaps school supply is starting to meet parental demand, making new charters less necessary and harder to launch.[i] If so, perhaps it’s time to look for new frontiers.
One option is to launch more charter schools in affluent communities. This would not only provide opportunities for sector growth, but would also broaden the political base for these schools of choice. Fordham senior visiting fellow Derrell Bradford candidly assessed the risk associated with today’s relatively narrow base: “Our [charter sector’s] anchor constituency is black and Hispanic families who don’t vote in the same numbers or contribute the same dollars as, say, the affluent Nassau County moms who typify the opt-out movement.”
We understand the political logic and surely support efforts to expand charters wherever they might satisfy parental demand. But we couldn’t help but wonder: Are we overlooking neighborhoods in America that are already home to plenty of poor kids, and contain the population density necessary to make school choice work? Especially communities in the inner-ring suburbs of flourishing cities, which are increasingly becoming magnets for poor and working-class families priced out of gentrifying areas?
This dynamic—labeled the “Great Inversion” by Alan Ehrenhalt, a senior contributing editor for Governing Magazine—is familiar to those of us living in Washington and other booming cities. The District of Columbia is home to a thriving, high-quality charter sector that has benefited from supportive public policies and ample private philanthropy. That’s all well and good—but the city’s affluence has put it out of reach of many poor and working-class families. The District is home to roughly 37,000 poor children and 120 charter schools. Yet neighboring Montgomery County, Maryland, has many more low-income students—some 55,202 of them, roughly 50 percent more than the District—and exactly zero charter schools. And in chronically low-performing Prince George’s County, Maryland, there are 81,055 low-income students, but just eleven charter schools.
The inner suburbs of Washington, D.C., are awash in poor kids, but they’re charter deserts—causing us to wonder whether this is common in other places as well. (Maryland and Virginia, as is well known, have been charter-averse at the political and policy levels.)
As the geography of poverty in America changes, are there many neighborhoods with plenty of population density and lots of disadvantaged kids but few or no charter schools? Or do the schools actually set up shop where poor families live—whether in cities, small towns, or the suburbs?
Those are the questions that Fordham’s new report, Charter School Deserts: High-Poverty Neighborhoods with Limited Educational Options, and its accompanying website address. The report analyzes the distribution of charter elementary schools across the country to provide parents, policymakers, and educators with information about which high- and medium-poverty communities do not have access to charter schools today. These groups can use our findings to better understand the supply of schooling options in their states and cities and perhaps press for changes that would improve that supply. Likewise, charter operators and authorizers will find the data helpful as they consider where to establish new schools.
To conduct the study, we recruited Andrew Saultz, Assistant Professor at Miami University, whose primary area of research is school quality and accountability. Dr. Saultz previously studied factors related to charter openings in New York City, as well as where charter schools are located in Ohio relative to that state’s demographics. He was keen to expand the latter study and recruited a trio of talented graduate students, Queenstar Mensa-Bonsu, Christopher Yaluma, and James Hodges, to help with the mammoth task.
Since distance from home is a key factor in families’ school selections, particularly at the elementary level, Saultz and his team defined “charter school deserts” as areas of three or more contiguous census tracts with moderate or high poverty and no charter elementary schools.[ii] They also show where elementary schools are located relative to census data on poverty, mapping every traditional district and charter elementary school in the country using geographic information system (GIS) software.[iii] Their results highlight patterns of charter location for each state and what that means for how these schools are ultimately distributed.
As expected, they find that charter schools are overwhelmingly located in large metropolitan areas. Yet almost all states with charter school laws also have deserts; specifically, thirty-nine of forty-two charter states have at least one desert each—and the average number of deserts per state is a worrying 10.8.
Six states have no more than two deserts: Alaska, Hawaii, Idaho, Iowa, New Hampshire, and Wyoming.[iv] Yet twelve other states have more than fifteen apiece: California, Florida, Georgia, Louisiana, Michigan, Missouri, New York, Ohio, Pennsylvania, South Carolina, Tennessee, and Texas. Make no mistake, that’s a lot of deserts—a particularly surprising total given that many of the latter states are home to lots of charter schools.
The number of census tracts varies greatly from state to state, however, so it’s helpful to look also at what proportion of a state’s high- to mid-poverty tracts are charter deserts. In seven states, it’s more than 30 percent (Alaska, Delaware, Georgia, Mississippi, Nevada, Rhode Island, and South Carolina), but in seven others, it’s less than 10 percent (California, Hawaii, Idaho, Indiana, Iowa, Louisiana, and New Hampshire).
To be sure, readers should consider these findings with some caution. They are approximate for several reasons: the shape or positioning of some contiguous census tracts on a map means that deserts could in fact be “drawn” differently; some areas meet the definition of a charter desert not because they lack charter schools but because they lack inhabitants, meaning they are literal deserts, or otherwise barren (for example, central Alaska); and still other areas have few charter schools yet miss the cut-off for high-poverty status. Still, a quick look at the interactive map shows that charter school deserts can appear anywhere. In fact, analysts find that, on average, states have 7.7 charter school deserts in urban areas and 3.1 in rural areas.
We draw two key takeaways from these findings.
First, the charter sector needs to move beyond city boundaries. Many poor families are moving—or getting pushed out—to the suburbs. Chalkbeat recently published a piece titled, “As low-income families exit Denver, charter network KIPP is looking to follow.” It chronicled how gentrification in Denver is pushing low-income families to the surrounding suburbs, and reported that KIPP is considering following them there. Kimberlee Sia, the CEO of KIPP Colorado, said the network’s leaders “believe there is need beyond what is going on in Denver.”
The “need beyond” is not unique to Denver. This study documents charter deserts not only in cities, but in inner-ring suburbs and rural areas too, which means that we are palpably failing to locate schools where the greatest need exists. That’s not to say that middle-class and more affluent families can’t or shouldn’t benefit from charter schools, nor does it negate the potential benefit to the charter movement of including more such families in our coalition. Our immediate point is simply to urge charter management organizations, other school operators, and philanthropies and organizations that boost, assist, encourage, and study charters to widen their gaze and consider opening schools in places that haven’t yet been on their radar but whose residents need more options.
Second, we must address the policy and practical barriers in some states that keep charter schools from locating where they are needed.
We already noted the challenge of political resistance in the states surrounding Washington, D.C. Recently, Robin Lake and her colleagues at the Center for Reinventing Public Education dug into the barriers that impede charter growth, particularly in the San Francisco Bay Area. The rising cost of doing business, a dearth of school facilities, and funder preference for particular locales were a few obstacles they cited. Yet, when these colossal hurdles collide with the Great Inversion’s funneling of low-income urban families into the suburbs, they thwart new charter schools. As Lake et al. explained:
Operators are finding it easy to access philanthropic funding in urban Oakland and San Francisco, but see those places as “over-saturated” and gentrifying. By contrast, in the less urban area of western Contra Costa County, there are more available facilities and a growing population of students that match most charter schools’ target populations—but fewer opportunities to access philanthropic dollars to start up new schools.
In short, if needy families and available facilities are increasing in numbers outside the city, so should the number of philanthropists willing to support them there.
But philanthropists, operators, and educators can’t forge new paths alone; they need their policymaking brethren in elected and appointed offices to adopt more supportive school choice policies. The current report provides several examples of states that restrict the number, expansion rate, and/or location of charter schools. Washington State, for instance, allows a total of forty charter schools statewide. Rhode Island permits just thirty-five. And Ohio limits charter openings to districts that the state considers “challenged.” Such policies stifle the creation and expansion of new schools in the numerous places that need them. Eliminating such policies should be high on reformers’ priority lists.
Our results suggest that some inner-ring suburbs and small towns are prime locales for rekindling charter growth. But that’ll only happen if funders, operators, and state and local policymakers expand their horizons. What’s the first step? Simple. Read this report and use our interactive map to locate your state, district, and neighborhood. Find the “charter deserts” nearby that contain sizable populations of needy kids who would benefit from the presence of more school options. Then roll up your sleeves and start irrigating.
[i] Of course, there are other possible explanations for reduced growth too—such as stronger quality control measures, more discerning authorizers, political backlash, and so on.
[ii] In some cases, charter school deserts may be identified differently based on how contiguous census tracts are positioned and how the circles that capture the deserts are drawn. In other words, the deserts are best viewed as visual approximations.
[iii] Analysts used school-level data from GreatSchools for 2014–15 since the database lacked information for nine states in the 2015–16 school year; consequently, schools that have opened or closed since 2015 are not reflected in the analysis.
[iv] Note that some states, like Iowa and New Hampshire, have very few census tracts with 20 percent or more of their population living at or below the poverty line, thus not meeting our definition of charter school desert—despite the fact that they have very few charter schools. Hawaii is also a special case since its geography does not align well with our method of identifying charter school deserts (its population is distributed across several islands versus “contiguous census tracts”).
It’s hard not to sympathize with the striking teachers in several states. They’re not very well paid, inflation is creeping up, a lot of classrooms are crowded with kids and lacking in textbooks and supplies, and a number of state and local budgets for school operations are extremely tight and sometimes declining.
All that is true. It’s also true that, while U.S. kids and parents generally like and respect the teachers they know best, American schoolteachers as an occupational class don’t enjoy the status and esteem conferred upon their peers in some other countries. It’s wholly understandable that a number of them are dissatisfied with their lot. They show it in other ways besides wearing red, shutting down schools, and marching around. Particularly in schools serving disadvantaged youngsters, the places where we most need experienced teachers, there’s a great deal of turnover—both departures for less challenged schools and abandonment of the field altogether.
But several other things are also true, and they need to be kept in mind as we observe the sea of red, watch interviews with angry or teary teachers, and wonder what to do with the kids on Wednesday.
First, though state and local budgets in some places are tight because tight-fingered policymakers have cut taxes and slashed spending, in other places there’s just not as much revenue as was expected, due to slow recovery from the “great recession,” lower than anticipated economic growth—and sometimes the exit of wealthy people to places where taxes are lower! In a great many places, school budgets are tight because competing obligations to pay for non-discretionary activities are hogging more of the available money. Medicaid is a big one (and is squeezing out higher ed funding, as well), but so, too, are the pensions and associated benefits of retired public employees, many of whom are former teachers. The Pew Charitable Trusts reported two months ago that “Even states that have overcome the effects of the recession may face financial pressures that could shape their budgets now and for years to come. A number of state governments face fiscal constraints today because of inherited shortfalls, such as unfunded public pension and retiree health care liabilities that total more than $1.5 trillion nationwide, and recurring deficits between annual state revenue and expenses.” And Education Next reported in February that “pension costs, excluding Social Security and retiree health insurance, have grown from $520 per student in 2004 to $1,220 today—or from roughly 5 percent to 10 percent of current expenditures per student.”
Second, as I’ve previously noted, U.S. school systems continue to use available dollars to hire more teachers rather than paying more generous salaries to the teachers they’ve already got—which also means hiring more teachers rather than better teachers. Education Week reported last month that “Over the past two decades, the number of the teachers in U.S. schools has increased by 21 percent, while the number of students has only increased by 12 percent.” This is an old phenomenon, but it continues today, even in our era of lean budgets, with 13 percent more teachers now than four years ago but just 2 percent more pupils. That’s not true in every single state—and it’s revealing that two of the four states where student growth has outstripped teacher inflation are Oklahoma and Arizona, where recent protests by aggrieved teachers have been especially forceful. Consider the seeming paradox of classrooms overflowing (in some schools) with kids while ever more teachers are employed. But note, too, how many schools—mostly in other places—are half-empty and how many have been closed or mothballed due to declining enrollments. Chicago was down another 10,000 kids this past autumn, compared with a year earlier—and 32,000 since 2013, enough to fill fifty-three average-size schools. Though the teaching workforce often appears highly mobile, in reality Chicago teachers—with tenure, benefits, pensions, etc.—just aren’t very likely to move to Houston.
Third, while it’s true that U.S. teachers as a workforce don’t get the respect they would like—and that their counterparts enjoy in, say, Finland and Korea—this is due in no small part to the political actions and policy preferences of their own unions. By insisting on tenure after just a few years in the classroom, by protecting the jobs of even the weakest instructors, and by demanding that physical education teachers be compensated the same as physics teachers, they have fouled their own nests when it comes to status and esteem. Which also affects salaries. While it’s easy to say that dear hard-working Ms. Rosencrantz—who is hugely effective with her math students—should get paid a lot more, does lazy Mr. Guildenstern down the hall, whose pupils seem to watch a lot of movies and do poorly on the state tests, deserve the same raise?
Fourth and finally, although teachers and their representatives despise this observation and offer all manner of (unpersuasive) explanations and rationalizations, it’s still true that the typical day in American public schools lasts six or six and a half hours and there are 180 of them in a year. That’s a lot less time than is put in by most people with full-time jobs, and that discrepancy needs to be borne in mind when making salary comparisons. Yes, it’s sad that many teachers must make ends meet by taking second jobs. But it’s also sad that their school job leaves them with that extra time—and pays them accordingly. It’s not good for the kids, either.
So yes, let’s sympathize, but let’s also be hard-nosed (not hard-hearted) in understanding the circumstances and forces that have conspired to cause a bunch of unhappy teachers to take to the streets. And let’s understand that if we’re serious about ameliorating the conditions that aggrieve them, a great many things need to change in very big ways.
Last month I published a five-part critique of a recent AEI paper by Collin Hitt, Michael McShane, and Patrick Wolf that looked at the connection (or lack thereof) between test scores and long-term outcomes in school choice programs. Not surprisingly, last week Pat responded with a forceful rebuttal. I think many of his points missed the mark, as I noted on Twitter. But this one I liked:
The first rule of science is that you can’t prove a negative. The second rule of science is that the burden of proof is always on the person claiming that a relationship between two factors actually exists. One develops a theoretical hypothesis, such as “The achievement effects from school choice evaluations reliably predict their attainment effects.” One then collects as much good data as possible to test that hypothesis, certainly employing an expansive definition of school choice unless and until you have an overwhelming number of cases. One then conducts appropriate statistical tests on the data. If the results are largely consistent with the hypothesis, then one conditionally accepts the hypothesis: “Hey, it looks like achievement effects might predict attainment effects just as hypothesized.” If the results are largely inconsistent with the hypothesis, as in the case of our study, one retains a healthy amount of doubt regarding the association between achievement and attainment results of school choice evaluations. That’s what scientists do.
All fair, and a useful frame. But also telling, as we shall see.
Pat’s hyper-pithy hypothesis
“The achievement effects from school choice evaluations reliably predict their attainment effects.”
That is certainly parsimonious, but there are two problems with it. First, it’s simplistic. Which achievement effects? For all students, or certain subgroups? Are we talking about elementary, middle, or high schools? Which kinds? What counts as “attainment”? How big do the effects have to be? When would we expect these effects to move in the same direction, and when might we reasonably expect them to diverge? What exactly does “reliably” mean in this context? And what is the justification for that definition or standard?
The second problem with this hypothesis, as stated, is that it’s only relevant to a small subset of the policy debates we’ve been having, and that Pat et al. referenced in their original paper. Yes, if this hypothesis is proven wrong—if it turns out that test scores don’t reliably predict important long-term outcomes—it would indicate that policymakers should be cautious about killing off school choice programs prematurely. Instead they should wait to see what their long-term impacts are, too, because there’s a decent chance that they will be more positive. On this I agree.
But the evidence examined against this narrow hypothesis would not tell us anything about the wisdom of holding individual schools accountable for short-term test-score changes, either within school choice programs or writ large. For that we’d need to craft a hypothesis, or set of hypotheses, that were directly related to that question.
My hypotheses about test-based accountability
So let me take a crack at identifying a trio of hypotheses that those of us who support test-based accountability would embrace and like to test. The first is about students, the second about elementary and middle schools, and the third about high schools.
- Students who learn dramatically more at school, as measured by valid and reliable assessments, will go on to graduate from high school, enroll in and complete postsecondary education, and earn more as adults than similar peers who learn less. This is the heart of the matter for test-based accountability: We think student achievement matters for individuals in the long run. Of course, there are a whole bunch of caveats that any reasonable person would apply. Learning just a little bit more probably isn’t enough to affect the longer-term outcomes much; to change a child’s life trajectory, the intervention has to be pretty dramatic. We are more likely to see big impacts for low-income kids, for whom schools matter more, than for affluent children, many of whom are likely to graduate from high school and college regardless of their K-12 experience. And if we had ways to measure other important skills, knowledge, and characteristics that schools work to inculcate in children but that don’t reveal themselves in tests of ELA and math, we might see an even stronger association between school-based learning gains and long-term outcomes. But still, kids who become a lot better at math, reading, and writing than they otherwise would have should go on to have better outcomes than those who don’t. If not, that’s a problem for judging schools based on test score changes.
- Elementary and middle schools that dramatically boost the achievement of their students should also boost their long-term outcomes, including high school graduation, postsecondary enrollment, performance, and completion, as well as later earnings. All of the caveats from above apply here, too.
- High schools that dramatically boost the achievement of their students should also boost their long-term outcomes, including postsecondary enrollment, performance, and completion, and earnings. Same caveats apply. But note, too, a critical difference from elementary and middle schools. For the former, high school graduation is a legitimate “long term” outcome. But for high schools, it’s another short-term indicator, akin to test scores. And we know from prior research that high-expectations high schools may boost achievement while decreasing their graduation rates, as some kids decide they are not up for the challenge. So I would never hypothesize that we’d see high school achievement and graduation rates moving in the same direction. We also would have a different hypothesis for certain types of high schools, as I explained in my original critique. Career and technical education, early college, and selective enrollment high schools, in particular, would be expected to have different outcomes for achievement and attainment, given their idiosyncratic missions and student populations.
Note that all three of my hypotheses call for “dramatic” learning gains, as I believe those are what will lead to changes in students’ life trajectories. Many of us testing hawks are big fans of KIPP and other high-performing charter networks and want to see them replicated because they are real outliers when it comes to student achievement. We believe that they are changing lives because their students are making much larger gains than similar peers. But we wouldn’t necessarily assume that a school performing at the fifty-fifth percentile would yield better results than a school at the fiftieth percentile when it comes to real-world outcomes.
The same goes for the flip side of accountability: intervening in or closing down chronically low-performing schools. No state accountability system or charter school authorizer goes after institutions performing at, say, the fortieth or forty-fifth percentile in student achievement growth. Rather they target those at the fifth or tenth percentile—those well below the mean, schools where students are making virtually no progress from year to year, or even going backwards. So what we want to know from research is: What are the odds that those chronically low-performing schools are having a positive impact on kids? That’s a very different question from the one Pat asked: whether schools or programs that do marginally better or worse on test scores do marginally better or worse on attainment. And my hypotheses—which should be tested empirically—assume that it’s extremely unlikely that very low-performing schools are somehow helping their students prepare for long-term success.
Pat is right, then, that we need to be clear about the hypothesis we’re testing. The review that he completed with Collin and Mike is appropriate for examining the relationship between achievement and attainment effects in school choice evaluations. (Though my serious qualms about which studies they included and how they analyzed the findings still stand.)
But that review is not at all appropriate for examining the assumptions—okay, hypotheses—upon which test-based accountability rests. Pat and his colleagues were stretching far beyond their findings when they wrote, in the original AEI paper, that “insofar as test scores are used to make determinations in ‘portfolio’ governance structures or are used to close (or expand) schools, policymakers might be making errors.”
Policymakers might be making errors—but we can’t know that from the studies that Pat and his colleagues examined. And that’s what’s wrong with their review: They went searching for evidence to disprove an overly simplistic hypothesis that is ultimately irrelevant to much of the debate over test-based accountability.
As Pat sometimes observes, I’m not a “scientist.” But I believe that the hypothesis that he and his colleagues claim to have disproved is what scientists would call a straw man, no?
On this week's podcast, Bibb Hubbard, founder and president of Learning Heroes, joins Mike Petrilli and Alyssa Schwenk to discuss better ways to communicate students’ academic progress (or lack thereof) to parents. On the Research Minute, David Griffith examines the recent AEI study that questioned the relationship between test scores and long-term outcomes.
Amber’s Research Minute
Collin Hitt et al., “Do Impacts on Test Scores Even Matter? Lessons from Long-Run Outcomes in School Choice Research,” American Enterprise Institute (March 2018).
There’s chronic and growing disenchantment with the quality of university-based teacher education schools and their ability to adequately prepare the nation’s teachers. The discord reached new heights with Arthur Levine’s groundbreaking 2006 report that found that “current teacher education programs are largely ill-equipped to prepare current and future teachers for new realities.” Five years later, Cory Koedel conducted an eye-opening study on grade inflation, which found that “students who take education classes at universities receive significantly higher grades than students who take classes in every other academic discipline.”
Enough, said the National Council on Teacher Quality, which decided about the same time that it would cast much-needed light on the caliber of teacher training in American universities. After a comprehensive analysis of over 1,000 programs, including in-depth reviews of university syllabi and other programmatic materials, it issued in 2013 the first edition of a highly visible and contentious report ranking teacher education programs, known as the Teacher Prep Review.
This new CALDER study conducted by Dan Goldhaber and the aforementioned Cory Koedel examines whether teacher education programs were responsive to these publicly released evaluation ratings. Specifically, would they respond to an “information experiment” designed to change their practices, which would, in turn, increase their public rating? The study first investigates whether teacher-ed programs changed in response to their ratings, and second, whether, if given a customized “nudge” explaining how they could increase their particular program’s rating in the future, they would actually do it.
The study focuses on elementary education programs with published ratings in 2013 through 2016. On the descriptive front, they find that program ratings appear to be linked to program characteristics. For example, private institutions tend to be rated lower and institutions with higher tuition and entrance exam scores tend to be rated higher. They also find that over the three years, the ratings improved overall for 26 percent of programs, declined for 14 percent, and stayed the same for 61 percent.
Now for the experiment they conducted. They assigned each program a specific recommendation that would boost its rating in particular. For instance, researchers recommended that programs do such things as raise their minimum grade point average for admission to 3.0, or observe and provide written feedback to student teachers at least five times—both of which are positively rated in the Teacher Prep metric and would boost scores. These recommendations were intended to be “low-hanging fruit” that were do-able in fairly short order, as opposed, for example, to revamping a program’s academic curriculum, which might take longer. Analysts randomly assigned half of the programs within each recommendation group (i.e., those receiving the same recommendation) to the treatment condition, whereby the program administrator and university president received a customized letter via email from the analysts explaining the recommendation and how it would improve their rating. These emailed letters were sent the last week of July 2013, close to when the inaugural program ratings would appear in U.S. News & World Report.
The key finding was that treated programs actually had slightly lower ratings from 2013 to 2016 than those in the control group. The decrease was 0.13–0.15 rating points, which is about 22 percent of a standard deviation. Analysts try to figure out this head-scratching result and hypothesize that perhaps their recommendations weren’t that feasible after all since raising the GPA could obviously mean losing students, especially since just 9.4 percent of undergraduate programs had a 3.0 minimum GPA as of 2013. They also discuss the hostility toward the ratings from the larger teacher education community and posit that their extra “touch” may have inflamed existing animosity. Finally, they suggest that perhaps their experiment was initiated too early since prior research has shown that “nudge interventions” are quite sensitive to timing. Regardless, it is a curious finding since the broader literature shows that post-secondary institutions are indeed quite responsive to public rankings such as the annual college rankings from Barron’s and U.S. News & World Report.
But let’s not forget the silver lining here, based on the descriptive part of the analysis: About a quarter of the programs improved their ratings after the report was released. Cue NCTQ President Kate Walsh: “While we cannot definitively assert that we caused these improvements, we think it is highly likely that the Teacher Prep Review played a substantial role in moving the ball yards—not inches—toward the goal.”
We couldn’t agree more.
SOURCE: Dan Goldhaber and Cory Koedel, “Public Accountability and Nudges: The Effect of an Information Intervention on the Responsiveness of Teacher Education Programs to External Ratings,” CALDER (March 2018).
One of the animating spirits of the rise of STEM education is the push for innovation—new technologies, new applications, new solutions to intractable problems. But is cultivating that creative ability as common an outcome in students as tech enthusiasts would lead us to believe? A recent study by a team of researchers from the University of California San Diego (UCSD) attempted to determine whether a gift for innovative thinking is merely something that prompts students to choose STEM classes, or whether it can be cultivated among those who believe they do not have that gift. The conclusion is that with billions of dollars of investment in STEM, American K–12 education could be putting its eggs in an unstable basket.
The locus of their work was an app-design contest open to all undergraduate students taking classes at UCSD’s Jacobs School of Engineering, whose winners would be determined by a panel of tech entrepreneurs and executives. Prize money was offered for the top three finishers, and entrants need not have progressed as far as a usable electronic prototype in order to compete; judges would weigh written plans and design mock-ups as seriously as live products. Spurred by outreach such as newsletters, emails, and information sessions, 103 students applied to participate in the contest as “self-selecting innovators.” A random group of students who did not sign up for the contest was then offered a $100 financial incentive to do so. Eighty-seven students accepted, and the researchers categorized them as “induced innovators.” These two groups were then randomly split, with half of each group given a further treatment in the form of confidence-boosting emails (a “motivational treatment”) through the first several weeks of the contest. Data collected included pre-contest and post-contest surveys and the number and quality of contest submissions.
Predictably, the majority of self-selecting participants were engineering and computer science majors, while the induced innovators were less likely to be drawn from these fields of study. Additionally, the induced innovators had lower GPAs than their self-selecting peers.
The results: The financial inducement seemed to achieve the goal of expanding the number and diversity of contest entrants. More importantly, the two groups ultimately submitted projects at more or less the same rate despite the difference in assumed capability and motivation for innovation. The submission rate for the full cohort of participants—approximately 10 percent—was deemed normal for such contests. The motivational treatment, however, had no significant impact on either the number of projects submitted or their overall quality, although the small number of submissions made definitive analysis difficult. In the end, average judges’ scores were indistinguishable between self-selecting and induced innovators; the quality of their work was essentially equal.
While the evidence suggests that “innovativeness” is not strictly an innate trait of certain individuals, the financial inducement and motivational treatment interacted in surprising ways and had some counterintuitive effects on student performance. For example, though the induced innovators with below-median GPAs performed the worst of all the groups, they were the individuals who benefited most from the motivational treatment, which gave them a small boost in both project submission rates and project quality ratings. Additionally, those individuals receiving the motivational treatment who did not ultimately submit a project were more likely to cite lack of time as the reason rather than a perceived inability to compete. The highest performers in the contest overall were also self-selecting innovators with below-median GPAs, but the motivational treatment appeared to exert downward pressure on their average scores relative to the above-median-GPA self-selecting innovators. The researchers concluded by raising questions related to individual characteristics they did not test: What stops a student who can succeed in the innovation task from self-selecting into the contest? How much financial inducement is enough to spur innovation? And why do motivational messages appear to interfere with the highest levels of successful innovation among the self-selecting students with the most apt backgrounds and qualifications?
Although this study examines a STEM initiative in a college setting, who participates in, and who benefits from, the boom in K–12 STEM education is also a vital question. Future English majors, future attorneys, and many other students could be left behind if technology education becomes the pre-eminent focus of K–12. Additionally, STEM education without innovation could easily become a muddled mess of poorly coordinated traditional lesson plans cloaked in twenty-first-century buzzwords. This new research suggests that the pipeline of innovators is larger than it would at first appear. Proper incentives along the way seem to be a promising means of bringing more students into the STEM fields, but we also need to know how to help them succeed once they’re in.
SOURCE: Joshua S. Graff Zivin and Elizabeth Lyons, “Can Innovators be Created? Experimental Evidence from an Innovation Contest,” National Bureau of Economic Research (February 2018).