Let’s assume that nobody is going to end up taking state assessments or end-of-course exams this spring. One way or another, everyone will be waived from those federal obligations and their state-imposed counterparts, mainly at the high-school level. The College Board and ACT are striving to improvise, reschedule, and reformat their voluntary tests, such as AP and SAT, and some—maybe a lot—of that will continue for purposes of college admissions and credit. But it’s unlikely that the states, districts, and schools that have been requiring participation in those tests will be able to do so. Nor will they be able to administer third-grade “reading guarantee” tests or the myriad other universal tests that have been playing a central role in results-based accountability for schools, districts, individual students, and sometimes for their teachers.
I agree with Jay Mathews—and, it seems, Diane Ravitch—that “the tests will be back,” presumably next year. They are, as Jay writes, “deeply woven” into the culture of American education, plus mandated by laws that aren’t likely to change. But we obviously can’t hold schools accountable for results over the next three months. Too much is out of their control. This leads to the question: What steps do we hope they—and their districts, networks, and states—will take? And are there ways to encourage transparency, if not true accountability, for taking those steps?
Yes, it’s mostly about process, not outcomes, and that always makes me uncomfortable. But everything is uncomfortable now, and schools, districts, and charter networks can fairly be expected to grapple with the discomfort, not just endure it. They’re not off the hook just because it’s a plague year. Arguably, they’re more responsible than ever as the federal government eases regulations and offers waivers. Besides, there might be things we learn during this time of improvisation that could improve our overall approach to accountability when the tests return.
Once schools, districts, and networks manage to deploy online assignments—which so far is happening at radically different velocities around the country—what about simple counts of how many kids are actually completing them? What steps are schools and teachers taking to contact, encourage, and assist those who aren’t? It’s easy to conjure revealing metrics that give clues, if not to how well kids are learning, at least to how hard schools are working to see that they try. While we’re at it, what about teacher participation in these efforts? As we know from ESSA-plan struggles, attendance and absenteeism by both students and teachers can be important clues to a school’s organizational health. In the virtual world, of course, a complicating factor is kids and adults who lack access to the requisite technology or don’t know how to operate it. But South Carolina is dispatching Wi-Fi-enabled buses to function as internet “hotspots” in low-income neighborhoods, and districts could be shipping laptops or tablets and instructions to the homes of those without. That’s too hefty a price tag for many districts—but an excellent use for whatever federal stimulus dollars end up in the K–12 realm.
How about efforts by schools to schedule IEP conferences (virtual, phone, conference call, Zoom, etc.) with parents of special-needs pupils? How many such conferences occur in a month, how many IEPs get modified to deal with the changed circumstances—and how many such students begin to get the supplemental or remedial instruction that they may need? (And what about something analogous for ELL youngsters?)
How satisfied are students, parents, and teachers with what the district, network, or school is providing them by way of quality opportunities during this trying time? If “school-climate” surveys make sense when school is physically in session, why not their virtual equivalent now?
What about planning for tomorrow? What steps are schools taking to do better at all this in May than in April? What are their plans for summer learning? (Two sets of plans at this point, methinks, one for physical summer school, another for summer online, including both catch-up and move-ahead.) What about planning for how to resume regular school—God willing—in the fall? John Bailey predicts more closures next year, so two sets of plans are needed here, too. And planners will need to factor in whatever adjustments are dictated by the present shutdown and whatever improvements can be made based on the experience that we’re all living through.
That’s not the end of it, either. Not every test is entirely out of the question, even now. The kinds that are routinely taken online, whether NWEA’s MAP tests or the Smarter Balanced adaptive assessments, don’t have to be taken in a schoolroom. Particularly when they’re the sort of test, such as MAP usually is, that’s given several times a year and thus shows growth within the year, not only do practitioners get valuable formative feedback from the results, but districts (or charter networks) can also use them to gauge the gains that are (or aren’t) being made by students in a particular school, grade level, or subgroup.
Places with district-wide or network-wide curricula can also devise their own online formative and summative assessments if they haven’t already done so, keyed to those curricula. Obviously these, too, must be amenable to being taken at home. If newly devised, they won’t yield growth or effectiveness data for schools, but they’ll at least provide an end-of-year status report for schools and subgroups, against which to plan what needs to happen over the summer and next year.
Beyond online testing are many rich and revealing—and educationally beneficial—forms of student portfolios, projects, research papers, book reports, and other evidence of learning and accomplishment. In normal times, these are usually deployed—and evaluated—mainly by individual teachers. But if they’re submitted online, they could easily be evaluated by others, too: other teachers, math specialists, curriculum directors, and more. Start by simply asking whether such student work is being turned in. Is every school in the district or network tabulating receipt of such assignments by X percent of its students, then cross-tabbing by grade level and student group? If not, why not, and what’s being done about it? How about teachers evaluating each other’s students’ work, with somebody monitoring and moderating for consistency of assignment difficulty and evaluative criteria?
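The cross-tab described above is straightforward back-office work. A minimal sketch, assuming hypothetical submission records with made-up fields (student ID, grade level, student group, and whether the assignment was turned in), might look like this:

```python
from collections import defaultdict

# Hypothetical records: (student_id, grade_level, student_group, submitted).
# In practice these would come from the district's learning platform.
records = [
    ("s1", 3, "ELL", True),
    ("s2", 3, "ELL", False),
    ("s3", 3, "general", True),
    ("s4", 4, "general", True),
    ("s5", 4, "ELL", False),
]

def submission_rates(records):
    """Cross-tab assignment-submission rates by (grade, group)."""
    totals = defaultdict(int)      # students per (grade, group) cell
    turned_in = defaultdict(int)   # submissions per (grade, group) cell
    for _, grade, group, submitted in records:
        totals[(grade, group)] += 1
        if submitted:
            turned_in[(grade, group)] += 1
    return {cell: turned_in[cell] / totals[cell] for cell in totals}

rates = submission_rates(records)
# e.g. rates[(3, "ELL")] is the share of third-grade ELL students submitting
```

Nothing here measures learning, of course; it only flags which grade levels and subgroups a school should be contacting and assisting.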
It’s likely too late in the year, and school and district circumstances are too different, for states to mandate much (perhaps any) of this. But motivated district and network leaders, as well as individual school heads, could make a lot of it happen, and back-office staff could remotely tabulate and analyze the data.
Accountability this year will be different than at any time in the past quarter century. But it isn’t dead. It’s simply engaged in social distancing.