There are many unspoken rules of human interaction—whether or not to look someone in the eye, the firmness of a handshake, a smile or words of greeting. Little things like this can lead to big judgments about trustworthiness or social acceptability.
What if we could use this type of behavior to help humans and robots interact? We’ve been exploring a future where public spaces are inhabited by both people and socially aware robots. They might be autonomous robots, or they might be acting as avatars for people who are participating by being connected to the robot from afar. In either case, the degree to which any social interaction progresses smoothly will depend on the judgments that arise.
Part of this stems from expected social norms, but a handshake is also a socially acceptable form of touch. Numerous psychological studies over the years have shown that physical touch, even if only fleeting, can have powerful pro-social effects. One example is the Midas touch effect, in which waitresses who briefly touch their customers receive higher tips—even when the customer has no conscious recollection of the touch occurring.
We wanted to see if the same effect holds between people and robots, or in this case a robotic avatar standing in for a person.
We set up an experiment in which two participants were invited to assume the role of agents negotiating the sale of a piece of land. Key to proceedings was that the buyer knew additional information revealing that more profit could be made from the sale, and could choose to exploit this advantage. We measured cooperation by the final profit split the buyer and seller agreed, with a 50/50 split being the fairest possible outcome.
The participants did not meet in person before the experiment. In each of the 60 sessions, one participant performed their role tele-presently, via a computer, using an Aldebaran Robotics Nao humanoid robot as their physical representative. Through Nao’s built-in head camera, microphone and speakers they could see and hear their opposite number, who could hear them but not see them or any image of them.
An equal number of buyers and sellers were in the position of communicating via the robot. Conventional wisdom would suggest that the buyers, negotiating from a position of power while being hidden from view and potentially thousands of miles away, would be more likely to exploit their tactical advantage.
Deal or no deal
To study what effect shaking hands at the outset of the negotiation had on the parties’ cooperation, we began an equal number of negotiations with a handshake and without. To shake hands, the tele-present negotiator extended the robot’s arm using a handheld controller, and could see the robot’s arm extend and be grasped by their counterpart. Touch-sensitive sensors in the robot’s hand detected the grasp and simultaneously made the controller in the distant negotiator’s hand vibrate, creating a subtle sense of connectedness between the pair.
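The sensor-to-haptics loop described above can be sketched as a simple polling pipeline: the robot’s hand sensors report a grasp, and that state is mirrored onto the operator’s controller motor. This is a minimal illustrative sketch, not the study’s actual software; the classes and method names here (`SimulatedRobotHand`, `SimulatedController`, `feedback_step`) are hypothetical stand-ins for the real robot and controller interfaces.

```python
# Minimal sketch of the handshake feedback loop: when the robot's hand
# sensors register a grasp, the remote operator's controller vibrates.
# All hardware interfaces here are simulated stand-ins, not a real robot API.

class SimulatedRobotHand:
    """Stands in for the Nao's touch-sensitive hand sensors."""
    def __init__(self):
        self.touched = False

    def read_touch(self):
        return self.touched


class SimulatedController:
    """Stands in for the operator's handheld controller."""
    def __init__(self):
        self.vibrating = False

    def vibrate(self, on):
        self.vibrating = on


def feedback_step(hand, controller):
    """One polling step: mirror the grasp state onto the controller's motor."""
    controller.vibrate(hand.read_touch())


hand = SimulatedRobotHand()
controller = SimulatedController()

feedback_step(hand, controller)
assert not controller.vibrating   # no grasp yet, no vibration

hand.touched = True               # counterpart grasps the robot's hand
feedback_step(hand, controller)
assert controller.vibrating       # operator feels the handshake
```

In practice such a loop would run continuously over the network link, so the vibration starts and stops with the grasp itself; the sketch shows only the state-mirroring logic.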
We found that whether the more powerful negotiator (the buyer) was tele-present or not, the act of shaking hands significantly increased cooperation between the negotiators, resulting in a fairer settlement. In practice, this meant that even when the buyer did not reveal their tactical advantage to the seller, they still accepted a settlement figure lower than the one they could have achieved by acting purely in their own self-interest.
And despite the fact that the in-person negotiator could see nothing of the tele-present negotiator—not their face nor an image of it—the use of a robot as a stand-in didn’t seem to affect the degree of trust built up through the negotiations, nor the degree to which negotiators reported having intentionally tried to mislead each other.
This is interesting because it demonstrates that the act of shaking hands, even via a robot, somewhat offsets the loss of the visual cues about intentions that a face would normally provide.
It certainly suggests that robot designers need to think more deeply about the effect such gestures have, and to build robots capable of performing them.