Three humans and a robot form a team and start playing a game together. No, this isn’t the setup to a joke; it’s the premise of a fascinating new study just released by Yale University.
Researchers wanted to see how the robot’s actions and statements would influence the three humans’ interactions with one another. They discovered that when the robot wasn’t afraid to admit it had made a mistake, that outward display of vulnerability led to more open communication among the people involved as well.
“Sorry, guys, I made the mistake this round,” the robot says. “I know it may be hard to believe, but robots make mistakes too.”
It was vulnerable statements like this that led human participants to talk more openly with each other and report a more positive experience overall while playing the game. Other participants were paired with a robot that either said nothing at all or spoke only in neutral terms. Those study subjects didn’t talk nearly as much with each other and said they had a worse time in general.
“We know that robots can influence the behavior of humans they interact with directly, but how robots affect the way humans engage with each other is less well understood,” says lead study author Margaret L. Traeger, a Ph.D. candidate in sociology at the Yale Institute for Network Science (YINS), in a press release. “Our study shows that robots can affect human-to-human interactions.”
Of course, no one is walking around with their own literal humanoid robot just yet, but AI and smart assistants have quickly become a mundane part of millions of people’s lives. Amazon’s Alexa isn’t exactly quick to acknowledge a mistake, but this research makes a compelling argument that such home assistants should perhaps adopt a more modest approach.
There’s really no telling what effect the proliferation of AI and robotic assistants will have on people’s day-to-day actions and interactions in the years to come, but it would certainly be naive to think there won’t be any. For example, many of us have become accustomed to shouting instructions at smart assistants like Alexa or Siri; it isn’t all that outlandish to think that someone who grew up coldly barking commands at robotic assistants would carry the same approach into human-to-human interactions.
This study suggests that if we design the robots of the future to convey some uniquely human emotions (vulnerability, compassion, regret), it could go a long way toward helping humanity retain those traits in the decades and centuries to come.
“In this case,” Traeger adds, “we show that robots can help people communicate more effectively as a team.”
A total of 153 people took part in the study, divided into 51 groups of three people and one robot each. Each group then played a tablet game in which the members had to work together to build efficient railroad routes over the course of 30 rounds. The 51 groups were randomly assigned one of three robot conditions: some played with a silent bot, others with a robot that made only cold, statistical statements about the game, and a third set with a distinctly more human-like robot that expressed vulnerability by sharing stories, admitting mistakes, and even making jokes.
Across all three robot variations, the bots were programmed to make at least a few mistakes over the course of the game.
Participants who played with the more human-like robot ended up talking to each other twice as much as other players and said they had a better time in general. More specifically, human players conversed more immediately after a vulnerable robotic statement than after a neutral one. Conversations were also more “evenly distributed” within the vulnerable playing groups, indicating greater cohesion among human teammates.
“We are interested in how society will change as we add forms of artificial intelligence to our midst,” comments Nicholas A. Christakis, Sterling Professor of Social and Natural Science. “As we create hybrid social systems of humans and machines, we need to evaluate how to program the robotic agents so that they do not corrode how we treat each other.”
Beyond simple games and social interactions, the researchers believe robotics and automation in working environments can have a significant effect on human workers’ interactions and perceptions as well.
“Imagine a robot in a factory whose task is to distribute parts to workers on an assembly line,” explains study co-author Sarah Strohkorb Sebo, a Ph.D. candidate in the Department of Computer Science. “If it hands all the pieces to one person, it can create an awkward social environment in which the other workers question whether the robot believes they’re inferior at the task. Our findings can inform the design of robots that promote social engagement, balanced participation, and positive experiences for people working in teams.”
It’s ironic: the prevailing narrative these days is that technology has been detrimental to human interaction over the past decade. Perhaps all we need to look up from our screens more often is a smartphone that acknowledges when it’s made a mistake.
The full study can be found here, published in Proceedings of the National Academy of Sciences.
Via TheLadders.com