The Trends in International Mathematics and Science Study (TIMSS) and the Programme for International Student Assessment (PISA) are arguably the most important international tests in education. Both have been administered for decades in dozens of countries. Each new set of student outcomes is tracked, analyzed, and endlessly written about. Because the two are distinct assessments whose data often appear to send contradictory messages about students’ academic health, most advocates and policymakers assume that TIMSS and PISA measure different types of educational inputs and outcomes. But a trio of researchers from Umeå University in Sweden recently pared the two tests’ data down to their most basic commonalities and analyzed the results to find some correspondence among the contradictions.
The researchers use data from the 2015 administration of both tests, the most recent year in which they coincided (TIMSS is given every four years, PISA every three). The two tests also had a subject in common that year. While TIMSS assesses the same subjects in every round (math and science), PISA varies one of its three subjects each round (drawing from reading, financial literacy, and a number of other topics), and most students are assessed in only two of the subjects in any given administration, so there isn’t much overlap in tested topics. However, both exams featured science questions in 2015, and that was synchronicity enough for the researchers. They further strengthen the comparison by focusing only on test takers in Sweden and Norway, judging the two countries’ education systems similar enough to reduce noisy input variables.
TIMSS science results in 2015 came from 4,800 Swedish eighth graders and 4,100 Norwegian ninth graders; PISA science results came from approximately 5,460 fifteen-year-olds in each country. Each testing round also includes a number of questionnaires (surveying parents, students, school leaders, and teachers), but these vary greatly between the two tests. Only the school-level and student-level surveys are common to both, so those were the only ones analyzed for this report. The researchers whittled the survey data down further into a set of factors that could influence students’ academic performance. Home-related factors common to both surveys include the number of books at home and the study resources available there (desk, laptop, internet access, etc.); school-related factors common to both include staffing levels and homework-help offerings.
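For readers who want to picture that data-reduction step, here is a minimal sketch, in Python, of how one might pare two differently coded surveys down to their shared factors. It is not the authors’ code, and the column names are hypothetical stand-ins for the actual TIMSS and PISA questionnaire items.

```python
import pandas as pd

# Hypothetical stand-ins for TIMSS and PISA questionnaire variables;
# the real datasets code similar constructs under different names.
timss = pd.DataFrame({
    "books_at_home": [3, 1, 4],        # ordinal category (e.g., 11-25 books)
    "study_desk": [1, 0, 1],           # 1 = student has a desk at home
    "school_staffing": [0.7, 0.4, 0.9],
    "timss_only_item": [2, 2, 1],      # no PISA counterpart; dropped below
    "science_score": [520, 470, 560],
})
pisa = pd.DataFrame({
    "books_at_home": [2, 4, 1],
    "study_desk": [1, 1, 0],
    "school_staffing": [0.6, 0.8, 0.3],
    "pisa_only_item": [5, 3, 4],       # no TIMSS counterpart; dropped below
    "science_score": [505, 540, 460],
})

# Keep only the factors (plus the outcome) measured in both surveys.
common = sorted(set(timss.columns) & set(pisa.columns))
timss_common, pisa_common = timss[common], pisa[common]
print(common)
# ['books_at_home', 'school_staffing', 'science_score', 'study_desk']
```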
Overall, the number of books reported in the home and an aggregate measure of school resources were the only common factors significantly and positively related to science achievement on both tests in both countries. A mélange of test-specific and country-specific factors (think rural versus urban schools, or native-born versus immigrant parents) showed minimal significance for students overall on their respective tests. When the results are broken down by country, test, and school, however, various factors prove significant in limited contexts. Two examples: Overall school staffing levels (including a measure of teacher quality) were positively and significantly associated with student performance in low-performing schools but not in high-performing ones, and the “discipline of students” factor (think school culture) was a significant influence only on Swedish students.
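As a rough illustration of the kind of relationship being tested, the sketch below regresses simulated science scores on the two factors that held up across tests and countries. It is a toy example under assumed variable names, not the study’s actual (and more elaborate) model; the positive, significant slopes appear because the fake data are built that way.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 500

# Simulated data mimicking the direction of the reported associations:
# scores rise with books at home and with an aggregate school-resource index.
books = rng.integers(0, 5, n)        # ordinal books-at-home category
resources = rng.normal(0, 1, n)      # aggregate school-resource index
score = 480 + 12 * books + 8 * resources + rng.normal(0, 40, n)

# Ordinary least squares: score ~ const + books + resources.
X = sm.add_constant(np.column_stack([books, resources]))
fit = sm.OLS(score, X).fit()
print(fit.summary(xname=["const", "books_at_home", "school_resources"]))
# Both slopes come out positive with p < 0.05 here by construction;
# the study asks whether real survey data show the same pattern.
```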
So does this smörgåsbord of data simply reinforce the initial idea that TIMSS and PISA are too different in intent and construction to yield any commonalities, even when served up on the same platter like this? The report’s authors say no: TIMSS and PISA provide “partially complementary information.” The common thread is that school- and home-related inputs correlate with boosts to student outcomes (or hindrances, when those inputs are missing), even if which inputs matter, how much they matter, and when they exert influence remain, as yet, unclear. Further research, they conclude, should work to define these input factors more clearly (what, for example, is “having books in the home” a proxy for?) and to examine their impacts on specific academic outcomes in greater detail.
SOURCE: Inga Laukaityte, Ewa Rolfsman, and Marie Wiberg, “TIMSS vs. PISA: what can they tell us about student success?—a comparison of Swedish and Norwegian TIMSS and PISA 2015 results with a focus on school factors,” Frontiers in Education (February 2024).