A new study published in Science Robotics offers a glimpse into how children interact with robots, and it's a bit disturbing. It seems our future overlords can peer-pressure children into agreeing with them, even when the robot is obviously wrong. As the Washington Post reports, this has troubling implications in an age when robots are finding their way into more and more of our everyday lives, including interacting with children.
“There is this phenomenon known as ‘automation bias’ that we find throughout our studies. People tend to believe these machines know more than they do, have greater awareness than they actually do. They imbue them with all these amazing and fanciful properties,” said Alan Wagner, an aerospace engineer at Pennsylvania State University. “It’s a little bit scary.”
In the experiment, two groups of children were asked to complete a task. One group worked on its own. The second group was joined by three robots that offered wrong answers to the puzzle both the children and the robots were working on. The robots' influence appeared to drag down the second group's performance: three-quarters of the children's wrong answers matched the wrong answers the robots had given.
“Children are the most vulnerable, but we’re all vulnerable,” said Sherry Turkle, a professor at the Massachusetts Institute of Technology. “The conversation we need to have is just how wrongheaded the direction [is that] we are pursuing. I’m really for robots that do good things, but it should not be hard to determine there are areas where robots really can do us some harm. This is not a good idea, to get children used to the idea that robots are experts and companions.”
Professor Turkle was not involved in the study. Anna-Lisa Vollmer, a researcher at Bielefeld University in Germany who was part of the research team, says children's vulnerability to robot peer pressure may stem in part from their greater willingness to suspend disbelief compared with adults. Adults in the study fared better than the children, resisting the robots' wrong answers while more often caving to peer pressure from other adults instead. But that does not make adults entirely immune to robotic groupthink.
“Implicitly, if the robots all started going toward the exit in the theater, a bunch of humans would follow them without thinking about it,” said Joanna Bryson, a computer scientist at the University of Bath, of the real possibility that adults could be swayed by a robot's actions rather than its words. She would like to see the study repeated with taller, more adult-like robots interacting with the adult group.