Could robots be part of the answer to alleviating teacher shortages (and other staffing issues) in the future? Lots of folks think so, and new research indicates kids might already be primed to accept a non-human information source.
A group of researchers from Concordia University in Montréal, Canada, ran two experiments with groups of three- and five-year-old children, all recruited from a database of existing research participants. Families received gift cards and the children received certificates of merit for participating. Approximately half the sample was white, a quarter of the sample was mixed race, and the remainder consisted of various other ethnic groups (such as African, Asian, and South American). Nearly 60 percent of participants were from high-socioeconomic-status (SES) households (earning more than $100,000 annually), just over 26 percent were middle-SES households ($50,000–$100,000), and the remainder came from low-SES households. Because the experiment took place during Covid lockdowns, all trials occurred over Zoom, with parents providing minimal assistance to set up the video connection.
In the first experiment, a human and a small robot with humanoid features told children the names of three familiar objects (car, ball, cup). The robot called them by the correct terms; the human used familiar but incorrect terms (book, shoe, dog). Then the children were presented with three unfamiliar items (the top of a turkey baster, a roll of twine, and a silicone muffin container). Again, the human and the robot told children the names of those objects, but each used a different set of nonsense words to do so (“mido,” “dax,” etc.). The children were then asked what each of the unfamiliar objects was called, choosing between the label offered by the robot and the one offered by the human. While the three-year-olds showed no clear preference for one “informant” over the other, the five-year-olds were much more likely to side with the robot, which had given the correct names of the familiar objects, than with the human, who hadn’t. The researchers say that both outcomes indicate that children attribute similar characteristics to human and mechanical informants, although the three-year-olds focus only on the fact that these similarities exist. By the age of five, however, children are also paying attention to the content coming from their informants and can, when the conditions are right, tell who is a competent informant and who is not.
The second experiment, with new participants in each group, was the same as the first except that the humanoid robot was replaced by an even smaller and less-anthropomorphized machine. The results were the same, showing that tech-informants don’t even have to look like humans for children to pay attention to them and, in the case of the older children, believe them.
The upshot: There are likely capacity and efficiency benefits to be gained by embracing machine-based teaching and learning in certain contexts. But don’t fear the robotic dystopian classroom just yet. Take note that all mechanical and robotic instruction described here was created, programmed, and set in motion by people. And the kids needed specific background knowledge as a prerequisite to the robots’ success, knowledge that came from parents, caregivers, libraries, and daycare staffers. So it seems it will be a very long time before such machines evolve beyond being helpers, however vital, to teachers and parents.
SOURCE: Anna-Elisabeth Baumann et al., “People Do Not Always Know Best: Preschoolers’ Trust in Social Robots,” Journal of Cognition and Development (March 2023).