Back in 2011, the Obama administration released its plan for improving teacher education. It included a proposal to revise Title II regulations under the Higher Education Act so that teacher preparation programs would be judged on outcomes-based measures rather than simply reporting on program inputs. It wasn’t a smooth process: serious pushback and a stalemate on a federal “rulemaking” panel followed. Draft regulations were finally released in 2014 but were immediately met with criticism. Many advocates wondered if the regulations would ever be finalized.
On October 12, the wondering ceased: the U.S. Department of Education at last released its final teacher preparation regulations. While the final rules run hundreds of pages, the provisions garnering the most attention are those outlining what states must report annually for all teacher preparation programs, including traditional, alternative-route, and distance programs. The indicators apply only to novice teachers[1] and include placement and retention rates of graduates during their first three years of teaching, survey feedback on effectiveness from both graduates and employers, and student learning outcomes. These indicators (and others) must appear on mandatory institutional and state teacher preparation program report cards that are intended to differentiate between effective, at-risk, and low-performing programs.
The public nature of the report cards ensures a built-in form of accountability. States are required to provide assistance to any program that’s labeled low-performing. Programs that fail to earn an effective rating for two of the previous three years will be denied eligibility for federal TEACH grants, a move that could incentivize aspiring teachers to steer clear of certain programs.
What do these new federal regulations mean for the Buckeye State? Let’s take a closer look.
The Ohio Department of Higher Education already puts out yearly performance reports that publicize data on Ohio’s traditional teacher preparation programs. Many of the regulations’ requirements, like survey results and student learning outcomes, are included in these reports, so the Buckeye State already has a foundation to work from. But right now, Ohio releases its performance reports for the sake of transparency. Institutions aren’t differentiated into performance levels, and there are no consequences for programs that have worrisome data. In order to comply with the federal regulations, Ohio is going to have to start differentiating between programs—and providing assistance to those that struggle.
Helpfully, the differentiation into three performance levels occurs at the program level, not the institutional level. This matters because the institutional label is an umbrella that covers several programs, and those programs don’t always perform equally well. For example, in NCTQ’s 2014 Teacher Prep Review, the University of Akron’s (UA) undergraduate program for secondary education earned a national ranking of 57, while UA’s graduate program for secondary education fared far worse, with a national ranking of 259. Using NCTQ’s review as a rough proxy for the upcoming ratings shows that folding all of an institution’s programs into a single institutional rating could mask very different levels of program performance.
Meanwhile, the regulations’ student learning outcomes indicator presents an interesting challenge. This indicator requires states to report annually on student learning outcomes determined in one of three ways: student growth (based on test scores), teacher evaluation results, or “another state-determined measure that is relevant to students’ outcomes, including academic performance.”
Requiring teacher preparation programs to be evaluated based on student learning won’t be easy for Ohio (or many other states). If Ohio opts for student growth based on test scores, that will likely mean relying on teachers’ value-added measures (VAM). If so, the familiar debate over VAM is sure to resurface, as is the fact that only 34 percent of Ohio teachers actually have value-added data available[2]. Even if Ohio’s use of value-added were widely accepted, methodological problems would remain. For instance, the federal regulations’ program size threshold is 25 teachers, and smaller preparation programs in Ohio aren’t going to hit that mark each year. That means bigger programs will be held accountable for student learning outcomes during graduates’ first three years of teaching while smaller programs won’t be held to the same standard. There’s also the not-so-small problem that value-added estimates are most precise when they draw on multiple years of data, and novice teachers simply won’t have multiple years available.
Using overall teacher evaluation results isn’t a much better alternative. The Ohio Teacher Evaluation System (OTES) needs serious work—particularly in the realm of student growth measures, which can evaluate teachers imprecisely in many subjects and grade levels because of the use of shared attribution and Student Learning Objectives (SLOs). The third route, using “another state-determined measure,” is also challenging. If there were a clear, fair, and effective way to measure student learning without relying on test scores and teacher evaluations, Ohio would already be using it. Unfortunately, no one has come up with anything yet. The arrival of new federal regulations isn’t likely to inspire a sudden wave of quality ideas.
In short, none of the three options provided for measuring student learning outcomes is a good fit. Worse yet, Ohio is facing a ticking clock. According to the USDOE’s timeline, states have the 2016-17 school year (which is already half over) to analyze options and develop a reporting system. States are permitted to use the 2017-18 school year to pilot their chosen system, but systems must be fully implemented by 2018-19. Whatever the Buckeye State plans to do in order to comply with the regulations, it’s going to have to make up its mind fast.
While the regulations’ call for institutional and state report cards is a step in the right direction in terms of transparency and accountability, implementation is going to be messy and perhaps impossible. There are no clear answers for how to effectively evaluate programs based on student learning outcomes. Furthermore, the federally imposed regulations seem to clash with the flexibility that the Every Student Succeeds Act (ESSA) era was supposed to bring to the states.[3] Unless Congress takes on reauthorization of the Higher Education Act, it looks like states are going to have to make do with flexibility under one federal education act and tight regulations (and the resulting implementation mess) under another.
[1] A novice teacher is defined as “a teacher of record in the first three years of teaching who teaches elementary or secondary public school students, which may include, at a state’s discretion, preschool students.”
[2] The 34 percent comprises teachers whose evaluation scores are based entirely on value-added measures (6 percent); teachers whose scores are based partially on value-added measures (14 percent); and teachers whose scores can be calculated using a vendor assessment (14 percent).
[3] It’s worth noting that the provisions related to student learning outcomes underwent some serious revisions from their original form in order to build in flexibility. The final regulations indicate that the Department backed off requiring states to label programs effective only “if the program had ‘satisfactory or higher’ student learning outcomes.” States are also permitted to determine the weighting of each indicator, including how much the student learning outcomes measure will affect the overall rating.