Another attack on accountability from the Ohio House, this time on school ratings
Please note the update posted on May 15th at the bottom of this blog post.
Tucked into a raft of House amendments to the governor’s budget legislation are changes to state law that would have serious ramifications for school accountability and transparency. The House proposals, never debated in the chamber’s education committee, would drastically alter the way Ohio produces overall school ratings.
Under current law, those ratings are calculated using a weighting system that incorporates multiple dimensions of academic performance. The House’s amendment, however, would throw out this more holistic system and instead use just one measure—the higher of either the performance index or value-added progress rating—as the overall grade.
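To make the mechanics concrete, here is a rough sketch of the difference between a weighted composite and the House's "higher of the two" rule. The grade-point conversion and component weights below are purely illustrative assumptions, not Ohio's actual report card formula.

```python
# Illustrative sketch (not Ohio's actual formula): contrast a weighted
# composite of report card components with a "higher of two measures" rule.
GRADE_POINTS = {"A": 4, "B": 3, "C": 2, "D": 1, "F": 0}
POINTS_TO_GRADE = {4: "A", 3: "B", 2: "C", 1: "D", 0: "F"}

def weighted_composite_rating(component_grades, weights):
    """Weighted average across multiple components (weights are hypothetical)."""
    score = sum(GRADE_POINTS[g] * w for g, w in zip(component_grades, weights))
    return POINTS_TO_GRADE[round(score)]

def higher_of_two_rating(performance_index_grade, value_added_grade):
    """Overall grade = the better of the two measures, as the amendment proposes."""
    better = max(GRADE_POINTS[performance_index_grade],
                 GRADE_POINTS[value_added_grade])
    return POINTS_TO_GRADE[better]

# A district with weak achievement (D) but strong growth (A), plus two other
# middling components:
components = ["D", "A", "C", "C"]   # PI, value-added, two other components
weights = [0.3, 0.3, 0.2, 0.2]      # hypothetical weights for illustration only
print(weighted_composite_rating(components, weights))  # prints "C"
print(higher_of_two_rating("D", "A"))                  # prints "A"
```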
It’s critical to get overall school ratings right, as they aim to offer the public clear, prominent signals about the performance of districts and schools. They are also used to determine when the state has an obligation to intervene in chronically low-performing schools for the sake of the students attending them. Though legislators could and should refine the current approach to assigning overall ratings (here are our suggestions), the House proposal uses a machete when a scalpel is in order. It’s a bad idea. Here’s why.
The House proposal creates a distorted view of overall school quality
Most of us would agree that a student with a B in math but an F in English shouldn’t be considered fully on-track. Why? Students need a strong foundation in both subjects, not one or the other. In like manner, we need to consider both measures of student achievement and growth when viewing overall school quality. Recall that achievement—as captured by the performance index—considers how students fare at a single point in time, providing information about whether pupils are meeting state academic standards (answering the question, “Are kids on track?”). Growth, or “progress,” as measured by value added, completes the picture by gauging how student performance changes over time (answering the question, “Are kids catching up?”).
To their credit, Ohio policymakers have long understood that achievement and growth matter, and that both should be incorporated into the overall rating. But under the amendment, a one-sided view of academic performance would emerge. This has real-world implications. Consider an example: A few years ago, Dayton Public Schools received an “A” value-added rating, its only top mark on the measure since the state began assigning letter grades. Under the House proposal, the district would’ve received an overall “A” for the year—a rating that would put it in elite company. But such a rating sweeps under the rug the persistent achievement struggles in Dayton, where only one in four students meets state academic standards, and even fewer go on to complete college. While district officials might celebrate, families and community members would likely miss important information about pupil achievement.
Though this is one example, there are dozens, maybe hundreds, of cases at the school level where the proposed system would give satisfactory-to-stellar marks to schools where tragically few students read, write, and do math proficiently. (It may also conceal schools where high-achieving students aren’t progressing.) Ohio families and taxpayers deserve a clear sense of whether students are achieving at high levels and making solid growth from one year to the next. Unfortunately, awarding schools the higher of the two ratings covers up potential weaknesses and creates distorted views of school quality.
It unnecessarily softens school accountability
It’s no secret that accountability for student outcomes is under fire, mainly from the adults working in the systems being held to account. The House proposal is yet another push for softer accountability. Because it calls for the use of the higher of the performance index or value-added measure—instead of combining them—districts and schools would receive rosier overall ratings. Consider the distribution of districts’ actual overall ratings from 2017–18 (solid bars) and projected overall ratings under the House plan (striped bars). Predictably, the House’s approach inflates overall grades, with an additional 103 districts receiving A’s and an additional thirty-five districts receiving B’s. Conversely, fewer districts receive the less superlative grades. This is not to say that there is a “right” or “wrong” distribution. But it does suggest that this attempt to upend school ratings may be less about improving accountability structures and more about using state law to produce a cheerier picture of school performance.
Figure 1: Distribution of district overall ratings under the current method and House proposal
It gives low-performing schools a free pass
Since overall ratings are linked to consequences for poor performance, inflated ratings would allow some low-performing district schools to escape consequences. For instance, just a single year of an “A” rating on value-added—or any rating above an “F”—would enable otherwise deeply troubled schools to avoid school improvement efforts under the House’s proposal that ditches district-level interventions via academic distress commissions and focuses on improvement at the school level. This is apt to occur, especially under another unwise House proposal wherein the state would revert to using a one-year, instead of multi-year, value-added score that is more susceptible to “swings” between letter grades. (Recall the example above of Dayton Public Schools, which received an “A” almost by chance in a year when a one-year score was used.) Moreover, the juiced school ratings might also allow poor performing charter schools to avoid closure. Finally, in a truly troubling move, it appears that the House legislation would extend yet another “safe harbor” period in which schools are shielded from sanctions associated with poor results. The bill prohibits the use of school ratings to determine consequences in a year when any change, no matter how minor, is made to the rating system, and it also calls for a “reset” in the timelines that determine sanctions (e.g., the three-year timetable for determining automatic closure of low-performing charter schools).
Fortunately, there was one silver lining in the wreckage: The House, realizing that inflating school ratings would have a detrimental effect on school choice programs connected to the accountability system, including eligibility for EdChoice scholarships, passed a last-minute amendment ensuring their proposed changes don’t affect students’ choice options. Nevertheless, this move hardly redeems the larger gutting of state accountability laws.
* * *
When it comes to the House’s proposals around report cards and school accountability, lawmakers need to pause and remember their purpose: To give educators, policymakers, and most importantly parents a clear, honest accounting of how schools perform on critical gauges of student achievement and growth. Though there is room for improvement in the state’s report card system, an either-or approach to the overall rating is not the right way forward. Here’s hoping that as the bill moves through the Senate, legislators will think twice about this provision.
Update May 15, 2019: An alternate interpretation of the House overall ratings proposal has surfaced since the drafting of this piece. This blog post above was written based on a Legislative Service Commission (LSC) bill analysis indicating that the overall rating would be based solely on the higher of either the performance index or value-added progress rating (and thus excluding the other report card components such as Gap Closing or Graduation). The Columbus Dispatch also ran an article based on this reading of the legislation. However, without any change in the legislative language, LSC later amended its analysis (after this blog post was written) to indicate that the other report card components would be included in the computation of overall ratings. The actual language in the House-passed Amended Substitute House Bill 166 (see lines 23632-23654) is ambiguous, perhaps resulting in the confusion.
The arguments and conclusion of the foregoing piece are not substantively different under the revised LSC interpretation, though the projected distribution of districts’ overall ratings shown in Figure 1 would be different. (Under the revised LSC interpretation, the State Board of Education would probably need to redo the rating formula.) Either reading of the legislation, however, is likely to yield systematically higher district and school ratings.
- Aaron Churchill
Over the next month or so, thousands of Ohio students will cross the stage and receive diplomas at their high school graduation ceremonies. It will be bittersweet, though, because many of these students will have earned their diplomas under a much-debated set of weakened graduation requirements. It will be the same story next year for students who are juniors today.
We at Fordham have been insistent and consistent in our criticism of these alternative requirements. We have called for a more careful examination of their unintended consequences, and have pointed out the serious flaws of the long-term proposal offered by the state board and the department for the classes of 2021 and beyond.
Given our persistent disapproval, one might be forgiven for thinking that we prefer to critique ideas instead of offering solutions. But that’s not true. We recently partnered with Ohio Excels, a non-profit coalition of business leaders dedicated to improving education in Ohio, and the Alliance for High Quality Education, a group representing around seventy Ohio school districts, to craft a proposal for graduation requirements that fixes some issues with the current policy without throwing out the baby with the bathwater. Under this plan, students would need to fulfill three requirements in order to graduate:
Requirement | Details
Course completion | Students must earn the minimum high school course credits as determined by the state and district.
Demonstrate competency | Students must earn a score of “competent” on the state’s Algebra I and English II end-of-course (EOC) assessments.
Preparation for college or career | Students must earn two diploma seals, one of which must be state defined.
If enacted, this plan would go into effect for the classes of 2023 and beyond. The classes of 2021 and 2022 would be free to either complete these three requirements or select one of the three pathways that were in law when they entered the ninth grade.
Like the proposal put forth by the department and the state board, these requirements allow more personalization. But unlike the state’s plan, each of these pathways is objective, comparable, and valid. For example, consider the options available to students who fail to pass end-of-course exams in math or English. Although there are several choices, students are required to complete at least one objective measure of competency—whether that’s college coursework, work-ready exams or job experiences that demonstrate career readiness, or fulfilling the requirements for enlisting in the military. Unlike GPAs and capstone projects, which are key aspects of the state’s plan, these measures would be difficult to game and are out of the hands of local educators who might be pressured to pass students even if they haven’t demonstrated competency. They are also more reliable, as every student is held to the same grading standard and the results are easily comparable across schools, districts, and geographic areas.
The plan also recommends reducing the testing burden on students and schools without dismantling the state’s ability to track student achievement and progress in basic subjects over time. Under current law, students can accumulate points based on their performance on seven end-of-course exams to graduate. This proposal reduces the number of tests to five by eliminating the English I and Geometry assessments. Students would still be required to complete the biology, American history, and American government exams, and the results would continue to be used for school accountability. But passing all of these state tests would not be required to earn a diploma. Instead, students would only need to demonstrate their mastery of Algebra I and English II. The cut score for these assessments—referred to as “competent”—would be determined by the Governor’s Executive Workforce Board in consultation with higher education, the business community, the K–12 sector, and career-tech representatives.
Finally, and most importantly, this plan advises the state to identify no later than ninth grade every student who may be at risk of not graduating on time. Identification could include those who are behind on credits or have earned at least one F in a core academic course, are chronically absent, or have a high number of suspensions. Schools would notify families that these students are off-track and would empower educators to offer critical interventions and support.
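For illustration, here is a minimal sketch of the kind of early-warning check the plan envisions. The specific cutoffs for absenteeism and suspensions are hypothetical placeholders, since the proposal leaves those details to the state.

```python
# Hypothetical sketch of an early-warning flag like the one the plan describes;
# the numeric thresholds below are illustrative assumptions, not part of the plan.
from dataclasses import dataclass

@dataclass
class NinthGrader:
    credits_earned: float
    credits_expected: float
    core_course_fs: int        # number of F's in core academic courses
    attendance_rate: float     # share of days attended
    suspensions: int

def is_at_risk(s: NinthGrader) -> bool:
    return (
        s.credits_earned < s.credits_expected   # behind on credits
        or s.core_course_fs >= 1                # at least one F in a core course
        or s.attendance_rate < 0.90             # chronically absent (illustrative cutoff)
        or s.suspensions >= 3                   # high number of suspensions (illustrative cutoff)
    )

# Example: a student on pace for credits but chronically absent gets flagged.
print(is_at_risk(NinthGrader(5.0, 5.0, 0, 0.85, 0)))  # True
```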
Nationwide, there are several states and cities that have early warning systems in place. Chicago Public Schools, for example, has used an evidence-based measurement called “freshmen on track” since 2008. This indicator was developed in partnership with the University of Chicago’s Consortium on School Research to predict whether students are likely to graduate based on their performance during freshman year. The district uses this information to provide pupils with interventions and additional support. Even outside of Chicago, there is a considerable amount of information and research on the impacts of early warning systems—including a 2016 study of three Ohio school districts.
As lawmakers consider a permanent change to Ohio’s graduation requirements, it’s important to remember that objectivity, comparability, and reliability matter. Alternative assessments and measures like culminating student experiences and GPAs might seem attractive, but they are burdensome for local districts, easily gameable, and of questionable rigor. The last point is perhaps the most important, as earning a diploma should signify that young people are truly ready to take their next step in life.
This spring’s school funding debates have revolved around the needs of poor students. Governor Mike DeWine has proposed a significant bump in state spending targeted at low-income students. And the House budget proposal, released just last week, adds even more to this pot of funds. The focus on supporting low-income students is commendable, as they often face the greatest barriers to success in college and career.
Given all the discussion about the needs of low-income students—and all the millions in funding being directed to support their education—it might surprise you to learn that Ohio faces major challenges in identifying students who are in fact low-income. In a press conference unveiling his funding plan, Representative John Patterson said, “we have been unable to fully define what ‘economically disadvantaged’ is.” Meanwhile, the DeWine plan bypasses the state’s own data on low-income pupils and instead targets aid based on federal Census data about childhood poverty, a noisy proxy, since it’s not based on actual headcounts of low-income students attending schools.
Without accurate data on low-income students, Ohio cannot efficiently target resources to students that need them the most. With only a few months until the General Assembly passes the state budget, there’s little chance that policymakers will be able to implement a different approach to counting low-income kids in this budget cycle. For now, that’s okay. But a critical, longer-term project for state leaders is to devise a more reliable approach to identifying low-income students.
This piece discusses the challenges Ohio faces in counting low-income, a.k.a. “economically disadvantaged” (ED), students, in light of policy changes from Washington, and it illustrates how they have inflated poverty rates across hundreds of schools. In a follow-up piece, I’ll consider how Ohio can transition to a different method of counting low-income students—known as “direct certification”—a shift that a few other states are already undertaking.
Policy shifts
Ohio has traditionally identified ED students based on their eligibility for free or reduced-price lunch (FRPL). According to federal guidelines, students whose household incomes are at or below 130 percent of the federal poverty level can receive free meals at school, while those at or below 185 percent of the poverty level are eligible for reduced-price meals. For many years, FRPL counts have served as a reasonable proxy of economic disadvantage in districts and schools.
But FRPL identification began to change significantly in 2010, when Congress enacted the Community Eligibility Provision (CEP). CEP allows certain high-poverty districts and schools to provide free meals to all students, regardless of their household income. To qualify for CEP, schools must have more than 40 percent of their students deemed eligible for free meals via direct certification—a process whereby low-income pupils are identified through their participation in means-tested programs like food stamps or flagged as being in foster care, migrant, or homeless. Individual schools, or an entire district, may qualify for CEP based on their certification counts.
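In rough terms, the eligibility test works like the sketch below. The function name and inputs are illustrative; the 40 percent threshold is the one described above.

```python
# Minimal sketch of the CEP eligibility test described above; the names are
# illustrative, but the "more than 40 percent" threshold comes from the
# description of the federal rule in this piece.
def qualifies_for_cep(directly_certified_students: int, total_enrollment: int) -> bool:
    """A school (or an entire district) qualifies if more than 40 percent of its
    students are identified through direct certification (e.g., food stamps,
    foster care, migrant, or homeless status)."""
    return directly_certified_students / total_enrollment > 0.40

print(qualifies_for_cep(205, 500))   # True: 41 percent directly certified
print(qualifies_for_cep(150, 500))   # False: 30 percent
```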
The admirable goal of CEP is to ensure that students in qualifying schools receive meals without stigma or administrative fuss. Yet because every child is able to receive free meals, CEP schools report 100 percent FRPL students—even though not everyone is actually eligible for subsidized meals. This, in turn, leads to inflated counts of ED students.
Inflated counts of low-income students
The table below highlights the predicament using five CEP schools in Ohio as examples. As you can see, all of these schools reported 100 percent ED rates in 2017–18 as a result of CEP participation. But their “true” rates of disadvantaged students are almost certainly much lower. Assuming similar enrollment patterns compared to the year prior to CEP adoption—a reasonable assumption given slightly declining childhood poverty rates—the schools’ actual ED enrollments are anywhere from 23 to 74 percentage points lower than what’s reported.
Table 1: Illustration of how the Community Eligibility Provision affects data on economically disadvantaged students
Source: Ohio Department of Education
Statistical imprecision might be of minimal concern if CEP schools were few and far between. But as Figure 1 shows, a sizeable portion of Ohio schools participate—nearly 1,000 out of roughly 3,300 public schools statewide. These schools are spread across Ohio’s less affluent cities, inner-ring suburbs, small towns, and rural areas. In fact, ninety-two of Ohio’s 610 districts have at least one CEP-participating school, though a disproportionate number are located in big-city districts like Cleveland and Columbus.
Figure 1: Number of schools participating in the Community Eligibility Provision
Source: Ohio Department of Education. Note: This figure includes any institution that participates in CEP. The vast majority of the CEP schools are district or charter, with a handful—about fifty per year—being nonpublic schools or programs operated by regional ESCs, career-tech centers, or county boards of developmental disabilities. The first year in which ODE records CEP participation was 2012–13.
To get a rough sense of “miscounted” students statewide, I estimate the total number of non-ED students who are deemed ED by virtue of CEP. To do this, the 2011–12 ED rates—from the year just before any Ohio school participated in CEP—serve as a proxy for schools’ “true” ED rates. Given the proximity of 2011–12 to the Great Recession, those rates are likely higher than ED rates today, which makes for a conservative estimate of CEP’s impact.
Assuming the FY 2012 ED rate more reliably captures current student poverty rates, ED enrollment in CEP schools is thus inflated by about 65,000 students—about the size of Cincinnati and Toledo districts combined. This has ramifications for state funding: Under the current system, higher poverty districts receive about $800 to $1,000 in additional funding per ED pupil. If CEP inflates ED counts by about 65,000 students, roughly $50 to $65 million per year is now being misdirected.
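The back-of-the-envelope arithmetic behind those figures is simple enough to spell out; the per-pupil range is the approximate amount cited above, not an exact statutory figure.

```python
# Rough arithmetic behind the estimate above: about 65,000 over-counted ED
# students times roughly $800 to $1,000 in additional per-pupil funding.
inflated_ed_students = 65_000
funding_per_ed_pupil = (800, 1_000)   # approximate range cited above

low, high = (inflated_ed_students * rate for rate in funding_per_ed_pupil)
print(f"${low/1e6:.0f} million to ${high/1e6:.0f} million per year")
# -> $52 million to $65 million per year
```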
Table 2: Estimating how many non-disadvantaged students attend CEP schools based on FY 2012 data
Notes: Almost all CEP schools’ ED rates are reported as 100 percent (in ODE data as “>95”, which I impute as 100 percent), though forty-eight schools had rates below 95 percent, mostly between 90 and 95. A total of 837 schools are included in this analysis—161 CEP schools are excluded due to not being district or charter schools, not having ED data from FY 2012, or not having pupil enrollment data in FY 2018. The average ED rates across CEP schools for FY 2018 and 2012 are weighted by each school’s FY 2018 total enrollments.
* * *
Analysts like the Urban Institute’s Matthew Chingos and our own Checker Finn have raised concerns about the impact of unreliable poverty data on research and accountability systems. CEP’s effects are also felt in the realm of school funding. As state legislators seek to effectively target resources to students who need them most, they need a more reliable method of counting poor students. As we at Fordham and others have suggested, a promising way forward is to use direct certification data to track low-income students. Such a move would be fraught with challenges, but fortunately other states are also tackling the problem so Ohio wouldn’t be going it alone.
In a follow-up piece, I’ll consider how state policymakers could carefully make this transition in the years ahead. Stay tuned.
Starting in the early 2000s, with the implementation of No Child Left Behind, federal law required states to ensure that all public school teachers were “highly qualified.” That meant having a bachelor’s degree, full state certification, and subject-area mastery, often determined by a content test.
When ESSA was enacted in 2015, the highly qualified designation became a thing of the past. The new law requires states to set their own definition of a qualified teacher, and Ohio did that via Senate Bill 216 last year. The law requires public school teachers to be “properly certified or licensed,” which means they must possess one of the state’s approved teacher licenses.
For traditional districts, this change will have little impact. The vast majority of their educators enter the classroom with an Ohio teaching license thanks to one of the state’s many teacher preparation programs.
But it’s different for charter schools, many of which employ nontraditional teachers who have a bachelor’s degree but did not attend a conventional training program. Because these educators don’t enter the classroom the usual way, they tend to work under long-term substitute licenses until they meet state requirements for traditional or alternative licensure. Charters also use long-term substitute licenses to place teachers into hard-to-fill grade levels or subject areas and to increase the diversity of their workforce.
In short, charters rely a great deal on long-term substitute licenses. These are permitted by state law and were used extensively under No Child Left Behind’s “highly qualified” teacher framework, but the Ohio Department of Education has interpreted SB 216 to mean that teachers working under a long-term substitute license are not considered properly certified teachers. So unless something changes prior to the law going into effect in July, Ohio’s charter schools are about to lose a vital aspect of their hiring and recruitment flexibility.
Fortunately, the proposed state budget offers a simple fix. It eliminates the requirement for charter school teachers to be properly certified or licensed by the state. Evidence is mixed on whether teacher licensure predicts classroom effectiveness, and beyond that, there are a few reasons why this is a good idea.
Autonomy in exchange for accountability
Buckeye charters are bound by the same accountability provisions as traditional district schools, including state report cards and federal sanctions. But they are also subject to additional measures, including an automatic closure law, which requires schools to permanently close after multiple years of poor performance, and the state’s sponsor evaluation system, which incentivizes authorizers to pay close attention to academic results. These tough accountability provisions ensure that persistently low-performing charter schools don’t stick around indefinitely, which is good news for kids, families, and communities. They also provide charter school leaders with a strong incentive to hire the very best teachers, regardless of whether they’ve gone through state regulatory hoops, because ineffective teaching lowers school ratings and risks closure. It therefore makes sense to grant them more hiring flexibility. (Keep in mind, too, that charters have more flexibility to swiftly dismiss poor-performing teachers without going through the bureaucratic procedures districts are subject to.) This is the grand bargain of charter schooling in action: stricter accountability in exchange for greater autonomy and flexibility in how schools build teams of great educators.
Competing for talent
Back in 2016, Fordham conducted a survey of leaders from the highest-performing Ohio charter schools. More than half noted that they “generally struggle” to find good teaching candidates, and that teacher pay is a main reason: 71 percent claimed that charter schools will always be at a serious disadvantage because they cannot afford to offer competitive salaries.
Low teacher pay is a direct result of the state’s inequitable charter school funding. Ohio charters, on average, receive 28 percent less funding than nearby school districts. The best way to address the issue is to fund charters equitably, and the governor and lawmakers are thankfully starting to work on that. But such a policy change is expensive and likely to be quite limited for the foreseeable future. In the meantime, state lawmakers should commit to leveling the recruitment playing field for charters as much as possible—and that means not restricting their hiring practices to those favored and dominated by traditional districts.
Attracting high-quality charter networks to Ohio
Ohio is home to top-notch homegrown charter networks like Breakthrough, Graham, KIPP, United Schools Network, and DECA. These schools do tremendous work for students and communities and are gradually expanding, but they only educate around 3 percent of the pupils in Ohio’s largest cities, and their forecasted growth won’t be enough to meet increasing demand for quality choice. Out-of-state networks with a résumé of excellence could help close the gap, but they won’t be tempted to open in Ohio unless the state becomes a more attractive market. Lawmakers could remedy this by funding charters more equitably and freeing them to recruit, hire, and train talented individuals who fit their missions. The schools would still be held to rigorous accountability measures, but these changes would make Ohio far more appealing to stellar nationwide networks.
* * *
To be clear, exempting charters from certification or licensure requirements wouldn’t result in a free-for-all. Teachers would still need college degrees, be subject to background checks, and, importantly, have to answer for the performance of their students on state tests and report cards. It merely maintains charters’ freedom to hire nontraditional teachers and assign them to a wider range of grade levels and subject areas. Skeptical lawmakers could even add safeguards, like requiring teachers to pass academic content licensure tests to prove their subject mastery. But limiting the flexibility of charter schools without hard evidence that it will benefit kids shouldn’t be an option.
Governors and legislative leaders in almost every state have made expanding and improving career and technical education (CTE) a top priority, yet the importance of quality data is often overlooked. The recent reauthorization of the Carl D. Perkins Career and Technical Education Act, which governs how states implement and expand access to CTE, offers a crucial opportunity to redesign related data systems. But to do so successfully, leaders must understand how they’re currently using CTE information and the barriers that prevent leveraging it in more effective ways.
To assist policymakers, Advance CTE, which represents state directors of career and technical education, partnered with a host of other organizations through the New Skills for Youth initiative to conduct a national survey of those directors. The response rate was impressive, with fifty-one state-level directors from forty-eight states, two territories, and Washington, D.C., responding. Based on these results, Advance CTE reports on the quality of data systems, identifies common challenges, and offers recommendations for improvement.
The report identifies four commonly used and broadly accepted indicators of career readiness at both the secondary and postsecondary level: completion of a work-based learning experience; attainment of a recognized postsecondary credential (including industry-recognized credentials and postsecondary degrees); completion of dual or concurrent enrollment; and successful transition to further education, employment, or the military. According to surveyed directors, nearly every state is able to collect individual data on these measures, though they are stronger at the secondary than postsecondary level. The majority of states are also able to disaggregate their data by career cluster, CTE program of study, and subgroups of students.
States use this information most frequently to inform technical assistance and program improvement efforts. For example, Idaho’s CTE Program Quality Initiative rewards excellent program performance by providing incentive funding. Most states also use data to inform state policy and planning, such as Arkansas, which referenced a Fordham study of post-program outcomes for career and technical education students to demonstrate to lawmakers the benefits of completing a sequence of high-quality CTE courses. States like Ohio also publicly report career readiness data via their accountability systems, though this is far more prevalent at the secondary level than the postsecondary. And although there are some outliers, most states seem reluctant to use this information for high-stakes decisions like linking funding with program quality. Less than half use it to transform career pathways.
The report identifies several reasons why states may not be fully leveraging their CTE data. One issue is that leaders don’t trust its quality. Many states rely on self-reported information to measure post-program outcomes. This includes surveys of former students or program participants, which are especially vulnerable to errors, misreporting, and low response rates. There are a large number of states that don’t actively validate and verify the accuracy of their career readiness measures. And many states have disparate and disconnected data systems, which makes it difficult to track young people during transitions from high school to postsecondary education and the workforce.
Improving the quality and reliability of CTE data is critical. The report recommends that states move away from self-reported information and toward more reliable sources. North Carolina does this by tracking the attainment of industry-recognized credentials through the institutions that grant them. States should also embed rigorous protocols for validating their data. Arkansas, for instance, requires schools to get employer validation when participants complete a work-based learning experience. And leaders should work to align definitions, measures, unique identifiers, and collection cycles across programs and disparate systems. Kentucky has done this since 2012, when it established an independent agency with authority over all education, workforce, and labor data.
As states implement the reauthorized Perkins Act, redesigning and aligning data systems will be vital. This report offers valuable assistance.
SOURCE: “The State of Career Technical Education: Improving Data Quality and Effectiveness,” Advance CTE (April 2019).
Editor’s Note: Back in September 2018, awaiting the election of our next governor, we at the Fordham Institute began developing a set of policy proposals that we believe can lead to increased achievement and greater opportunities for Ohio students. This is one of those policy proposals.
With Mike DeWine sworn in as Ohio’s 70th governor, and with his administration now well underway, we are proud to roll out the full set of our education policy proposals. You can download the full document, titled Fulfilling the Readiness Promise: Twenty-five education policy ideas for Ohio, at this link, or you can access the individual policy proposals from the links provided here.
Proposal: Streamline the state funding formula by eliminating the Targeted Assistance, Capacity Aid, and the bonus funding components, and merge those funding streams into the Opportunity Grant.
Background: School funding has long been a joint state-local responsibility. In 2017, Ohio districts generated roughly $9 billion in local tax revenue, with wealthy districts able to raise more. Meanwhile, the state contributes $10 billion-plus and distributes more funds to Ohio’s neediest districts to compensate for their lower taxing capacities. To allocate the bulk of state aid, lawmakers first set a formula, or “base,” amount ($6,010 per student in FY 18). This base is then adjusted by the State Share Index (SSI), which accounts for districts’ income and property wealth. Together, the base amount and SSI determine districts’ Opportunity Grants, which are the core of Ohio’s foundation funding program (table 1). Additional components are layered on top, such as Targeted Assistance, Capacity Aid, various student-based categories, and bonus funds. Some of these additional components are essential to equitable state funding; for example, Ohio adds funds when schools serve special-education students or students with limited English proficiency. Other components, such as Targeted Assistance and Capacity Aid, are less necessary to achieving funding-equity goals yet increase the complexity of the funding system. Moreover, unlike the Opportunity Grant, which provides a certain amount of state aid to all districts, not everyone receives funds under Targeted Assistance and Capacity Aid. In 2017, ninety-three out of 610 districts were denied Targeted Assistance, and 308 were denied Capacity Aid.
Table 1: Main components of Ohio’s foundation funding program, traditional districts, FY 2017
Source: ODE, Foundation Settlement Report (FY 2017, June 2 Payment).
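To see how the core pieces fit together, here is a deliberately simplified sketch of the calculation. It ignores caps, guarantees, and the add-on components, and the enrollment figures and SSI values are hypothetical.

```python
# Simplified illustration of the core state-aid calculation described above;
# Ohio's actual formula includes additional components and adjustments not
# modeled here, and the example inputs are hypothetical.
BASE_AMOUNT = 6010  # per-student formula ("base") amount, FY 2018

def opportunity_grant(enrollment: float, state_share_index: float) -> float:
    """Core state aid: the base amount, scaled by the district's State Share
    Index (which reflects local income and property wealth), times enrollment."""
    return BASE_AMOUNT * state_share_index * enrollment

# A 2,000-student district with limited local capacity (SSI = 0.80) versus a
# wealthier district of the same size (SSI = 0.25):
print(opportunity_grant(2_000, 0.80))   # 9,616,000.0
print(opportunity_grant(2_000, 0.25))   # 3,005,000.0
```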
Proposal rationale: The Opportunity Grant, Capacity Aid, and Targeted Assistance have overlapping purposes: all aim to drive more dollars to districts with limited funding capacity. By collapsing these similar components into the core Opportunity Grant, the state would create a less complicated formula that is easier to predict, while also maintaining a focus on equity between districts. Centering attention on the Opportunity Grant would also allow the state to concentrate on its design and functionality, rather than having to review multiple calculations. Meanwhile, the bonus components spread too little funding across all districts to incentivize any real improvements; those dollars would be better allocated via the Opportunity Grant.
Cost: The proposal rolls existing dollars into the Opportunity Grant and, in isolation, would not cost additional state money. However, districts’ state funding levels would change, and hence the proposal would likely interact with caps and guarantees; fiscal modelling should be undertaken to predict costs.
Resources: For more on merging funding streams into the base funding, see the Foundation for Excellence in Education’s paper Student-Centered State Funding: A How-To Guide for State Policymakers (2017); this idea is also part of the school-funding proposals in Ohio House Bill 102 of the 132nd General Assembly. For a relatively broad description of the state funding system, see A Formula That Works: Five Ways to Strengthen School Funding in Ohio, a report written by Bellwether Education Partners’ Jennifer Schiess and colleagues and published by the Fordham Institute (2017). For detail on district-funding calculations, see the ODE report School Finance Payment Report (SFPR): Line by Line Explanation (2018).
Editor’s Note: Back in September 2018, awaiting the election of our next governor, we at the Fordham Institute began developing a set of policy proposals that we believe can lead to increased achievement and greater opportunities for Ohio students. This is one of those policy proposals.
With Mike DeWine sworn in as Ohio’s 70th governor, and with his administration now well underway, we are proud to roll out the full set of our education policy proposals. You can download the full document, titled Fulfilling the Readiness Promise: Twenty-five education policy ideas for Ohio, at this link, or you can access the individual policy proposals from the links provided here.
Proposal: Include satisfactory teacher-performance evaluations in two out of the past three years of teaching as a condition of receiving tenure and eliminate coursework requirements for tenure.
Background: Also known as “continuing-service status,” tenure provides teachers with job security until they resign or retire. Tenured teachers—those receiving “continuing contracts” that never expire—enjoy protections that include extensive hearing and appeals processes should a district seek to terminate their employment and are designated as “last out” within their area of instruction when districts need to reduce the size of their workforce (ORC 3319.16-17). In contrast, all other teachers are employed on “limited contracts,” with lengths up to five years. When these contracts expire, districts can terminate the employment relationship by nonrenewing the contract under a less onerous process. To be eligible for tenure, Ohio teachers must meet several conditions set forth in statute (ORC 3319.08). They include the following: being licensed for at least seven years, teaching in the district for at least three out of the past five years, and completing additional college coursework since initial licensing. Districts may deny tenure to eligible teachers, though this rarely happens in practice. In New York City, for example, one study found that almost 95 percent of teachers received tenure in the late 2000s. Given the significant job protections at stake—and perfunctory tenure reviews—states have moved to strengthen their tenure policies. Today, nineteen states (not including Ohio) now require evidence of classroom effectiveness as a condition of tenure; four states have repealed it altogether for newly hired teachers.
Proposal rationale: Most Ohio teachers are talented, hardworking professionals, but others are less-effective instructors, including both novice and tenured teachers. In fact, survey data indicate that most educators believe there are tenured teachers who underperform and whose employment should be reconsidered. Yet expensive, time-consuming dismissal procedures result in districts rarely attempting to remove low-performing tenured teachers from the classrooms. To better ensure that ineffective instructors are not rewarded with job protections, satisfactory evaluations should be required before districts grant tenure. Additionally, research has not shown a correlation between additional college coursework and higher student achievement, and this tenure requirement should be repealed. Moreover, this condition imposes out-of-pocket expenses on teachers (or schools, if they offer reimbursements) to take these courses.
Cost: No fiscal cost to the state.
Resources: For background on Ohio school employment laws, see the Ohio School Boards Association’s HR Reference Guide to School Law (2014); for information on other states’ tenure policies, see the National Council on Teacher Quality’s web page “Tenure”; for research on New York City’s tenure reforms, see Performance Screens for School Improvement, a report written by Susanna Loeb and colleagues and published by the Center for Education Policy Analysis (2014); for more on appeal processes, see David Griffith and Victoria McDougald’s report Undue Process: Why bad teachers in twenty-five diverse districts rarely get fired, published by the Fordham Institute (2016); and for survey data on teacher and administrator views of tenured teachers, see Patrick McGuinn’s Ringing the Bell for K–12 Teacher Tenure Reform, published by the Center for American Progress (2010).