The Ohio Coalition for Quality Education (OCQE) has hit the airwaves in an effort to change the state’s accountability policies. The group claims that Ohio doesn’t take into account differences in student demographics across schools—and is thus unfair to schools educating at-risk pupils. Along with the Electronic Classroom of Tomorrow (ECOT), they are promoting the adoption of a new accountability measure that they believe will solve the problem.
The trouble with their argument is that Ohio policymakers have already implemented a robust measure—value added—that takes into account student demographics. Given what these groups are lobbying for, it is important to review the basics of student achievement, demographics, and school accountability, including value-added measures.
Let’s first keep in mind that concerns about the link between student demographics and educational outcomes are hardly new. For decades, analysts have recognized this connection. The famous “Coleman report” from 1966 was among the first studies to document empirically the massive achievement gap between minority and white students. Gaps by race and income remain clearly evident in today’s NAEP and state-level test data.
These stark results, of course, call into question the use of pure achievement measures (e.g., proficiency rates) for school accountability purposes. As we pointed out in a recent article co-authored with the California Charter Schools Association, using achievement measures alone would constitute poor policymaking. That’s because it essentially penalizes schools for having higher percentages of disadvantaged students, who almost invariably perform worse. (Breaking that connection between demography and destiny is, of course, what much of the education reform movement is about. Alas, relatively few schools have managed to do it thus far.)
Recognizing that schools deserve credit for the growth students make over time (regardless of their starting points), many states have expanded their accountability systems by adopting a “student growth” or “value-added” measure. (The brand-new federal law, which replaces No Child Left Behind, strongly encourages such an approach.) Fordham has long been a proponent of growth measures such as these: In a joint publication in 2006, we recommended that Ohio fully implement a value-added accountability measure for all public schools, including charters. We published a “primer” on the state’s value-added system in 2008 and have used schools’ value-added ratings in our annual analyses of report cards.
So what is the purpose of value added? How does it work, and how does it address concerns about demographics?
Value added gets at the central issue of school effectiveness. The question at the heart of any school evaluation ought to be: Is the school having a positive impact on student learning? Aside from randomized experiments (which aren’t feasible at scale), value-added methodologies are the most rigorous empirical approach available for gauging a school’s effectiveness. Their statistical methods aim to isolate the unique contribution of a school. As the Council of Chief State School Officers states,
The main purpose of value-added measures is to separate the effects of nonschool-related factors (such as family, peer, and individual influence) from a school’s performance at any point in time so that student performance can be attributed appropriately.
Value added is premised upon individual student learning gains tracked over time. Statistical analyses leverage student-level data, linking each student’s current test results to his or her prior results.[1] By using students’ prior achievement, as Ohio’s value-added method does, the effect of student demographics is almost entirely accounted for, even without explicitly including demographic variables (e.g., race or ethnicity, income status, and so forth) as controls. In other words, the consistent effect of demographics throughout a child’s learning experience is baked in, via his or her own past test results.
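To make the mechanics concrete, below is a minimal sketch of a covariate-adjustment value-added model: regress students’ current scores on their prior scores plus school indicators, then read each school’s estimated contribution off the coefficients. This is an illustration on simulated data, not the SAS model Ohio actually uses (which, among other things, draws on multiple years of test history); every variable name here is hypothetical.

```python
# Minimal covariate-adjustment sketch of a value-added estimate.
# NOT the SAS model Ohio uses -- just the core idea: a student's
# prior score serves as her own control, so school effects are
# estimated from gains relative to similar starting points.
import numpy as np

rng = np.random.default_rng(0)

n_students, n_schools = 3000, 30
school = rng.integers(0, n_schools, n_students)          # school assignment
true_effect = rng.normal(0, 3, n_schools)                # each school's true impact
prior = rng.normal(200, 25, n_students)                  # last year's scale score
current = 0.9 * prior + true_effect[school] + rng.normal(0, 10, n_students)

# Design matrix: prior score plus one dummy column per school (no intercept).
X = np.column_stack([prior, (school[:, None] == np.arange(n_schools)).astype(float)])
beta, *_ = np.linalg.lstsq(X, current, rcond=None)

school_effects = beta[1:]                                # estimated school contributions
school_effects -= school_effects.mean()                  # center: relative to the average school
print(np.corrcoef(school_effects, true_effect - true_effect.mean())[0, 1])
```

Because each student’s prior score enters the model directly, any demographic influence already reflected in that score is controlled for automatically; that is precisely the “student as her own control” logic described above.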
Value added sets a standard expectation of growth for all schools. Because value added is premised on student growth, state policymakers can set a common growth standard for all schools with value-added data. As defined in Ohio law, the expectation is that a school’s students make a year’s worth of growth in a year’s time. A school with mainly high-achieving students is expected to contribute a year’s worth of learning; the same goes for a school with primarily disadvantaged students. Critically, this ensures a consistent standard across schools, so value added isn’t lowering the growth expectations for our most disadvantaged children. The technical documentation on Ohio’s value-added measure makes the point directly: “Through this approach, Ohio avoids the problem of building a system that creates differential expectations for groups of students based on their backgrounds.”
Value added doesn’t correlate with demographics. Almost no correlation exists between economic disadvantage and value-added results. Chart 1 shows the school-level relationship between value-added index scores in Ohio and the percentage of economically disadvantaged students. The correlation is very weak (-0.19), indicating that schools with high percentages of low-income students are not systematically receiving lower value-added ratings. In other words, unlike under pure achievement measures, low-income schools are not penalized simply for the students they enroll. Similar correlations appear in the 2012–13 data; see here and here.
Chart 1: Relationship between value-added results and percent economically disadvantaged, Ohio schools, 2013–14
[[{"fid":"115258","view_mode":"default","fields":{"format":"default"},"type":"media","link_text":null,"attributes":{"style":"height: 367px; width: 600px;","class":"media-element file-default"}}]]
Source: Ohio Department of Education
Notes: The value-added index score is a school’s average estimated gain (in NCE units) divided by the standard error of that gain. The index score is used to determine a school’s A–F value-added rating; an index score of at least 2.0 is the threshold for an A letter grade. “Economically disadvantaged” generally refers to students eligible for free and reduced-price lunch (family income at or below 185 percent of the federal poverty level). The correlation between school-level mobility rates and value-added index scores is also virtually nonexistent (-0.15).
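For readers who want to see the arithmetic in the notes above, here is a hedged sketch of the index-score calculation (a school’s mean estimated gain in NCE units divided by the standard error of that mean) and of the school-level correlation. The schools, gains, and poverty rates are simulated; only the formula and the 2.0 threshold come from the chart notes.

```python
# Sketch of the index-score arithmetic described in the chart notes:
# index = average estimated gain (NCE units) / standard error of that gain.
# Schools, gains, and poverty rates below are simulated for illustration only.
import numpy as np

rng = np.random.default_rng(1)

def index_score(gains):
    """Mean gain divided by the standard error of the mean."""
    gains = np.asarray(gains, dtype=float)
    se = gains.std(ddof=1) / np.sqrt(len(gains))
    return gains.mean() / se

n_schools = 200
pct_disadvantaged = rng.uniform(0, 100, n_schools)
# Simulated gains are generated independently of poverty, mirroring the
# near-zero relationship the chart shows.
indices = np.array([index_score(rng.normal(0.5, 4, 120)) for _ in range(n_schools)])

letter_a = indices >= 2.0                                # "A" threshold per the notes
print("share earning A:", letter_a.mean())
print("corr with % disadvantaged:", np.corrcoef(indices, pct_disadvantaged)[0, 1])
```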
In fact, it’s worth emphasizing that quite a few high-poverty schools are making an exceptional impact on student achievement. Table 1 lists the top ten high-poverty schools in Ohio on the value-added metric; several of these high-flying schools are charters. These examples are proof that all schools, regardless of their demographics, can earn stellar marks on the value-added dimension of state report cards.
Table 1: Top value-added scores among high-poverty schools (90 percent economically disadvantaged or more), Ohio schools, 2013–14
[[{"fid":"115259","view_mode":"default","fields":{"format":"default"},"type":"media","link_text":null,"attributes":{"style":"height: 201px; width: 400px;","class":"media-element file-default"}}]]
There’s no reason Ohio’s e-schools couldn’t be on this list. Yes, they tend to serve disadvantaged populations, but as long as they help their students make a lot of progress from one year to the next, they can earn an honors grade for value added. But they don’t.
It’s important to keep in mind that value-added methods have limitations. They are constrained by the amount and precision of the data collected (a problem for any rigorous accountability measure). And in Ohio, value added doesn’t cover as many grades or subjects as we’d like: Presently, it covers only grades 4–8 (there are plans to extend it into high school once end-of-course exams are phased in), and to date, value-added data in science and social studies haven’t been publicly available. Meanwhile, the broader research community continues to discuss how value added should be properly used and understood, particularly when applied to teacher evaluations. For examples of thoughtful dialogue on value added from leading scholars, see here, here, or here.
The point is this: If demographics are truly the driving concern for an advocacy group like OCQE, they’re barking up the wrong tree. Value added carefully controls for the influence of demographics by premising gains on a student’s prior test scores—the student serves as her own control—and as a result, schools are placed on a more even playing field for accountability. Ohio policymakers should further incorporate value added into the school accountability system, refine it where necessary, and de-emphasize (though not abandon) achievement-based metrics.
All this raises some tough questions for OCQE and ECOT: Given Ohio’s value-added measure, why are they lobbying for a new accountability measure under the guise of fairness? Is their true concern the welfare of disadvantaged students, or are they just searching for a measure that produces results more to their liking?
[1] Ohio, like Pennsylvania, North Carolina, and Tennessee, contracts with statisticians at SAS to conduct its value-added analyses. A number of other states use “student growth percentiles,” a somewhat similar way of measuring growth.
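Since the footnote mentions student growth percentiles, here is a simplified, assumption-laden sketch of the underlying idea: a student’s growth percentile is roughly her percentile rank among students who started from similar prior scores. Actual SGP implementations typically use quantile regression across multiple prior years of scores; the binned version below merely illustrates the concept on simulated data.

```python
# Simplified sketch of "student growth percentiles": where does a student's
# current score rank among students with similar prior scores?
# Real SGP implementations typically use quantile regression over several
# prior years; this binned version only illustrates the idea.
import numpy as np

rng = np.random.default_rng(2)

prior = rng.normal(200, 25, 5000)
current = 0.9 * prior + rng.normal(0, 12, 5000)

bins = np.quantile(prior, np.linspace(0, 1, 21))         # 20 prior-score bins
bin_id = np.clip(np.digitize(prior, bins) - 1, 0, 19)

sgp = np.empty_like(current)
for b in range(20):
    mask = bin_id == b
    scores = current[mask]
    # Percentile rank of each student's score within the peer group.
    sgp[mask] = 100.0 * (scores[:, None] > scores[None, :]).mean(axis=1)

print("median SGP (should be near 50):", np.median(sgp))
```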