Helping parents evaluate their educational options is an important component of effective school choice policy. Giving them more and better information with which to compare options will likely become essential if choice continues to proliferate. A new paper looks at one common information source, user reviews, to see if it is fit for the task.
Analysts from the National Center for Research on Education Access and Choice (REACH) used a combination of artificial intelligence (natural language processing) and qualitative analysis to study the content and usefulness of user reviews posted on the search tool GreatSchools. Their data include 50 million words of text across more than 600,000 reviews written about 84,000 schools from 2009 to 2019. Their methodology included cleaning and standardizing the data (correcting misspellings, removing extraneous words and punctuation, making sure every word was recognizable in English, etc.), categorizing comments based on frequently used words (physical environment, curriculum quality, school staff, etc.), and developing a hierarchy of “usefulness” based on the specificity of the words used. That last component is where AI was particularly helpful.
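For readers curious about the mechanics, a minimal sketch of that kind of cleaning-and-categorizing pipeline might look like the Python below. The spelling corrections and topic word lists are invented for illustration; the paper’s actual lexicons and AI models are far more sophisticated.

```python
import re
from collections import Counter

# Hypothetical topic lexicons; the study's actual word lists are not reproduced here.
TOPIC_KEYWORDS = {
    "physical environment": {"building", "playground", "facilities", "cafeteria"},
    "curriculum quality": {"curriculum", "math", "reading", "program"},
    "school staff": {"teacher", "teachers", "principal", "staff"},
}

# Toy misspelling map standing in for a real spell-correction step.
CORRECTIONS = {"teachrs": "teachers", "curiculum": "curriculum"}

def clean(review: str) -> list[str]:
    """Lowercase, strip punctuation, keep only alphabetic tokens, fix known typos."""
    tokens = re.findall(r"[a-z]+", review.lower())
    return [CORRECTIONS.get(t, t) for t in tokens]

def categorize(review: str) -> dict[str, int]:
    """Count keyword hits per topic; one review can touch several topics."""
    counts = Counter(clean(review))
    return {topic: sum(counts[w] for w in words)
            for topic, words in TOPIC_KEYWORDS.items()}

print(categorize("The teachrs are great, but the building needs work!!"))
# {'physical environment': 1, 'curriculum quality': 0, 'school staff': 1}
```

Simple keyword counting like this can sort reviews into topics at scale; judging how *useful* a review is within a topic is the harder problem, which is where the analysts leaned on AI.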
On the upside, the text reviews on GreatSchools reflect actual user experiences of the schools, making them highly relevant to families considering enrollment. The most common topics discussed in text reviews were overall quality, school staff, and school culture, all very important. User types also covered all the bases, from parents (the most common posters) to students, teachers, and principals, allowing for a valuable variety of perspectives. Additionally, since they are written in response to open-ended requests for comment, text reviews can provide rich information that further illuminates common statistical data like test scores and student-teacher ratios.
However, the analysts find that the broad promise of GreatSchools’ text reviews is not generally realized. First, reviews come from a small, self-selected group of stakeholders willing to post on such platforms. Second, the language tends to be evocative but not particularly informative, suggesting users highly motivated either to praise or to condemn a school. Reviews focusing on overall quality, perhaps the first thing a prospective parent might read, are a case in point: they are often vague, full of hyperbolic words like “wonderful” or “awful” but providing little useful detail about a school’s specific strengths and weaknesses. The analysts also note that there can be too many reviews for busy parents to read, and that reviews are not organized in any way that would help readers quickly determine whether a given review is relevant to them. For example, a long and comprehensive review of a K–8 school could cover important details about staff quality or extracurriculars but turn out to apply only to middle-school-age students, while the reader was interested in the elementary grades.
Interestingly, text reviews of charter schools tended to be longer than those of traditional district schools. Reviews of both charter and private schools included more information about school-level features (including specific points of contrast with traditional schools) and less about school staff. Charter school reviews included more detail about instruction and learning, and private school reviews less about physical environment, than reviews of traditional district schools.
Finally, the researchers used regression analysis to compare the focus and tenor of text reviews with the star ratings those same users assigned to the schools, broken down by school type and user type. In general, reviewers who discussed a school’s overall quality and resources in their text reviews gave those schools high star ratings, while those who discussed physical environment gave low ones. Teachers gave better star ratings when their text reviews discussed school culture, and parents gave better ratings when theirs discussed instruction and learning. High star ratings for traditional district schools (from all user types) were associated with text reviews focusing on school staff and school-level features, but those same categories were associated with lower star ratings for private schools. Taken together, all of this reinforces the notion that reviewers base their comments on individual educational values, and that two parents (or students or teachers) can experience the exact same school in very different ways.
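To illustrate the technique (not the paper’s actual model), here is a toy ordinary-least-squares version of that comparison. The topic shares and star ratings below are fabricated so that the recovered coefficients mimic the qualitative pattern the authors report.

```python
import numpy as np

# Fabricated data: each row is one review, with the share of its text devoted
# to each topic; stars holds the rating the same user gave the school.
topics = ["overall quality", "resources", "physical environment"]
X = np.array([
    [0.6, 0.2, 0.0],
    [0.1, 0.0, 0.7],
    [0.4, 0.4, 0.1],
    [0.0, 0.1, 0.8],
    [0.3, 0.3, 0.3],
])
stars = np.array([4.8, 1.7, 4.3, 1.4, 3.5])

# Add an intercept column and fit ordinary least squares: stars ~ topic shares.
A = np.column_stack([np.ones(len(X)), X])
coef, *_ = np.linalg.lstsq(A, stars, rcond=None)

for name, b in zip(["intercept"] + topics, coef):
    print(f"{name:>22}: {b:+.2f}")
# Positive coefficients on overall quality and resources, and a negative one on
# physical environment, echo the direction of the associations described above.
```

The coefficients only describe associations in what reviewers chose to write about, which is exactly why the same topic can point in opposite directions for district and private schools.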
The report suggests several options for improving the value and functionality of user reviews on GreatSchools and similar platforms. They include more closed-ended questions (which, the authors note, GreatSchools has begun implementing in recent years) and the use of AI tools to collate, prioritize, and summarize large quantities of reviews. These are good ideas, provided AI summaries don’t take precedence over access to the full slate of reviews, and would definitely improve GreatSchools’ usefulness. Research shows that parents investigating schools already draw information from many sources, some of them informal, and that singular details (which may change from year to year and child to child) can be the deciding factor. Increasing the clarity and depth of information sources can only help the cause.
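As a sketch of what “collate and prioritize” could mean in practice, a platform might score reviews for specificity and surface the most informative ones first. The scoring rule and word lists below are hypothetical, not anything GreatSchools or the authors have implemented.

```python
from collections import Counter

# Hypothetical word lists: reward concrete detail, discount vague superlatives.
SPECIFIC = {"principal", "tutoring", "curriculum", "homework", "bus", "arts"}
VAGUE = {"wonderful", "awful", "amazing", "terrible", "great", "bad"}

def usefulness(review: str) -> float:
    """Score a review: concrete topic words count up, hyperbole counts down."""
    counts = Counter(review.lower().replace(",", " ").replace(".", " ").split())
    return sum(counts[w] for w in SPECIFIC) - 0.5 * sum(counts[w] for w in VAGUE)

reviews = [
    "Wonderful school, amazing teachers, great community.",
    "The principal added after-school tutoring and an arts program this year.",
]
for r in sorted(reviews, key=usefulness, reverse=True):
    print(f"{usefulness(r):+.1f} | {r}")
```

Paired with tags for grade level or reviewer type, a ranking like this could spare the parent of a kindergartner from wading through middle-school-only reviews while leaving the full set available to anyone who wants it.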
SOURCE: Douglas N. Harris et al., “Can User Reviews Like Those on GreatSchools Improve Information for Schooling Choices?” National Center for Research on Education Access and Choice (November 2024).