Every two years, educational researchers await scores from the National Assessment of Educational Progress (NAEP), eager to dive into the results and hypothesize reasons for changes, or the lack thereof, in the data. But what NAEP actually reveals about education across the fifty states is more complex than simple score comparisons. In a recent report from the Brookings Institution, Matthew Chingos follows NAEP cohorts from fourth to eighth grade, finding that students in some states make gains at both levels, while students in others lose early gains by eighth grade or, conversely, catch up despite a slow start.

NAEP analyses typically compare students in the same grade across years. Instead, Chingos tracks cohorts' four-year growth from 2003 to 2017 (for example, comparing a state's 2013 fourth grade scores to its 2017 eighth grade scores). Rather than using raw NAEP scores, the report uses demographically adjusted scores calculated by Chingos and his colleagues at the Urban Institute. These adjustments use restricted-use student-level data to account for demographic differences across states, controlling for race, English language learner status, free or reduced-price lunch eligibility, birth month and year, frequency with which a language other than English is spoken in the home, and Individualized Education Program (IEP) status. NAEP tests different students in each administration, but tracking birth cohorts controls for statewide policy or economic changes that might affect 2011 fourth graders (the same cohort as 2015 eighth graders) differently than, say, 2017 fourth graders.
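The cohort-tracking idea can be sketched in a few lines of code. The state names and scale scores below are invented for illustration; the actual report uses demographically adjusted NAEP scores from the Urban Institute.

```python
# Sketch of cohort four-year growth: a cohort tested in fourth grade
# in year Y is tested again in eighth grade in year Y + 4.
# All scores here are hypothetical, not taken from the report.
scores = {
    # state: {(grade, year): average scale score}
    "State A": {(4, 2013): 245, (8, 2017): 287},
    "State B": {(4, 2013): 236, (8, 2017): 285},
}

def cohort_growth(state_scores, grade4_year):
    """Four-year growth for one birth cohort: the eighth grade score
    minus the same cohort's fourth grade score four years earlier."""
    return state_scores[(8, grade4_year + 4)] - state_scores[(4, grade4_year)]

for state, s in scores.items():
    print(state, cohort_growth(s, 2013))
```

In this made-up example, State B starts lower in fourth grade but shows more growth, the kind of pattern the report surfaces when comparing states like Delaware and South Dakota.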

In math, Chingos finds a rough positive correlation between cohort four-year growth and eighth grade scores, but many states deviate from that pattern. For example, Delaware and South Dakota had nearly the same average eighth grade score but more than ten points of difference in average growth; while Delaware's students tested much better in fourth grade, South Dakota nearly caught up within four years. Chingos posits that some states teach math skills earlier, thereby scoring higher on the fourth grade assessment, but that this does not mean their students are actually learning more by high school, as demonstrated by smaller improvements in later scores. He supports this by comparing a decade's progress: all fifty states show an increase in fourth grade scores between 2003 and 2013, but only thirty show gains in the same cohorts' eighth grade scores from 2007 to 2017. An average 7.6-point increase in fourth grade scores falls to an average gain of 0.3 points by eighth grade; in twenty states, the fourth grade gains were gone within four years.
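The decade comparison above boils down to simple arithmetic: a state's fourth grade gain from 2003 to 2013 versus the same birth cohorts' eighth grade gain from 2007 to 2017. The scale scores below are illustrative only, not figures from the report.

```python
# Fade-out check for one hypothetical state (all numbers invented).
g4_2003, g4_2013 = 234, 242   # fourth grade averages, ten years apart
g8_2007, g8_2017 = 278, 279   # eighth grade averages for the same birth cohorts

fourth_grade_gain = g4_2013 - g4_2003   # gain measured at fourth grade
eighth_grade_gain = g8_2017 - g8_2007   # how much of it persists at eighth grade
fade_out = fourth_grade_gain - eighth_grade_gain

print(fourth_grade_gain, eighth_grade_gain, fade_out)  # 8 1 7
```

Here a hypothetical eight-point fourth grade gain shrinks to one point by eighth grade, mirroring the report's finding that an average 7.6-point gain falls to 0.3 points.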

Reading results show even more examples of states with similar average eighth grade scores but very different average four-year growth. However, while ten-year average gains in reading at the fourth grade level are lower than in math, at 3.3 points, less of that gain is lost by eighth grade, where the average gain is 2.6 points. Florida and Nevada had particularly notable persistence in their score gains, with Nevada's fourth grade scores improving by 11.6 points from 2003 to 2013 and its eighth grade scores improving by 9.7 points from 2007 to 2017 (the same cohorts of students). In California, quick gains make up for an early deficit: its fourth grade average score is nearly ten points behind North Carolina's, but its eighth grade average is slightly higher.

Chingos' cohort-tracking method addresses but cannot entirely overcome the primary limitation of NAEP: every administration tests different individuals. And the demographic adjustment can only account for the six factors on which NAEP collects student-level data.

The exploratory analysis highlights the importance of middle school, raising the question of what fourth grade improvement means if benefits appear lost within a few years. Middle school determines whether students will enter high school on track, and states should consider how to mitigate the risk of fade-out when crafting reforms.

SOURCE: Matthew M. Chingos, “What can NAEP tell us about how much US children are learning?” Brookings Institution, May 2018.

Emily Howell was a research intern at the Thomas B. Fordham Institute and was pursuing a master's degree in education policy at The George Washington University. She has a bachelor's degree in English from Yale University, and before returning to school she taught middle school in San Antonio, TX, and Washington, DC, in both charter and traditional public schools. Her research interests include school climate, teacher…
