Nexi the Robot
“In spite of the hardness and ruthlessness I thought I saw in his face, I got the impression that here was a man who could be relied upon when he had given his word,” Neville Chamberlain said of Adolf Hitler. That first impression can fairly be described as an error in judgment.
Rarely do our own misjudgments carry consequences on that scale, yet popular sentiment seems to hold that when it comes to truly trusting others, you just never know. Wolves in sheep’s clothing abound, and prudence demands skepticism. Whether we are choosing a babysitter, a doctor, or a car, we try not to base our judgments on first impressions. We ask for references, look up reviews, and check Blue Book values. We know that “I’ve just got a good feeling about this” can be famous last words.
But this may not be a full portrayal of our capacity to judge others’ character. New research led by David DeSteno at Northeastern University suggests that when it comes to deciding whom to trust, our first impressions can be quite accurate. In fact, personality traits such as honesty and fairness are linked to specific kinds of nonverbal cues, and humans can pick up on these signals during interactions. According to these researchers, we are like robots, programmed to move in particular ways depending on whether we are honest. To know whom to trust, one simply needs to be able to read the patterns.
Psychologists (both professional and amateur) have thought for some time that cues such as facial features and expressions can give us important information about others’ internal states. Crow’s feet around the eyes distinguish a fake smile from an authentic one. A raised upper lip and wrinkled nose reveal disgust. But there has been heated debate over the extent to which this information is reliable, as well as which kinds of cues represent the best source of information.
Given this uncertainty, to argue that the untrustworthy can be recognized by certain tell-tale nonverbal cues is a strong claim. To test it, DeSteno and his collaborators conducted two studies. The first asked the simple question of whether the presence of nonverbal information (versus its absence) would significantly influence the accuracy of people’s character judgments. If so, then it would seem that humans are learning something about the character of others through their nonverbals.
The researchers had two participants meet and get to know each other for five minutes, either face to face or through an online interaction. The pair then played an economic game in which player A must decide whether to cooperate with player B for less individual gain (but more collective gain) or to adopt a selfish strategy that yields greater individual gain at the cost of collective gain. Will player A be selfish or cooperative? Can player B trust player A to look out for collective gain rather than individual gain? And, most importantly, if player B is asked to predict how player A will act, will that prediction be more accurate when player B has previously been exposed to player A’s nonverbal behavior? Will player B have an accurate impression of this person’s character even before seeing how they act?
As predicted, the participants who interacted face to face, and therefore had access to nonverbal behavior, were significantly better predictors of how their partners would behave in this paradigm. The presence of nonverbal cues increased the accuracy of predictions by an impressive 37 percent.
But what information were they picking up on? To find out, every interaction was video recorded with multiple cameras and then coded by independent research assistants for the presence of certain nonverbal behaviors (e.g., smiles, forward leans, crossed arms, face touches). Were there certain sets of nonverbal cues that led to more accurate predictions of others’ play? Indeed. Behold the formula for predicting untrustworthiness: hand touching, face touching, arm crossing, and leaning away. The more often player A displayed this set of cues, the more selfishly they played in the economic game.
These are fascinating data. But, as the researchers are quick to point out, they are correlational. There is no way of knowing for certain whether these particular cues actually signal selfishness or merely happened to accompany it in this context. What would constitute more compelling evidence that there is a trustworthiness signal? Ideally, we could program a human either to display the set of cues or not, and then see how this influences judgments of trust. That would be strong experimental evidence. But you can’t program humans; after all, we’re not robots. Luckily for these researchers, robots are.
Meet Nexi, the newest creation of the Personal Robots Group at MIT. Nexi is a social robot – able to express a range of emotions and expressions in order to meaningfully interact with humans (in a way that does not creep them out). When turned off, Nexi isn’t much more than a big-eyed hunk of metal with wheels for legs. But flip the switch and Nexi comes to life with human-like dexterity and mannerisms that compel us to see a mind in the machine.
Conveniently for the researchers, Nexi can be programmed to exhibit specific sets of behavioral patterns during interactions with humans. The perfect experimental manipulation. Will participants who interact with Nexi trust the robot less when it exhibits the set of nonverbal cues identified in Study 1? That is, will participants’ judgments of, and behavior towards, Nexi in the economic game be influenced by the robot’s expression of these (vs. other) cues? Yes indeed. Participants trusted Nexi significantly less when she was programmed with the human nonverbal signals of selfishness. And it’s not that participants liked Nexi less when exposed to those nonverbals – they liked the robot just as much as when the cues were absent. The cues’ presence related exclusively to participants’ trust.
This line of research vindicates our instincts about those with whom we interact. When we “just have a feeling” about someone, we can be right. So, then, what to make of Neville Chamberlain? Was he particularly bad at reading nonverbal cues? The picture becomes more complicated when we interact with individuals who are motivated to conceal their true inclinations. Importantly, the participants in this study did not know, while interacting, that they would later be playing an economic game with each other. They had no reason to try to deceive their partners. This is not always the case in the real world. What would have happened if participants had known about the game before their interaction? Surely, the untrustworthy would have attempted to appear trustworthy. How successful would they have been? Would their partners still have been able to identify them in spite of their attempts? What if they knew which nonverbal cues to avoid displaying? The uncomfortable implication of these findings is that the more we uncover about the dynamics of trust, the more we learn about how to deceive effectively.
Photo credit: Technology Evolution