The little dinosaur robot blinks its big, blue eyes and stretches its neck. It tilts its head toward a human, who responds with a pat. The dinosaur closes its eyes in apparent contentment. But when it suddenly freezes, the human flips it over to check its batteries.

Barack Obama watches Asimo, a robot made by Honda (State Department photo by William Ng / public domain).

Why we respond to social robots the way we do—sometimes treating them as real beings, other times recognizing that they are machinery—is the central question of new research by Herbert Clark, the Albert Ray Lang Professor of Psychology, Emeritus, in Stanford's School of Humanities and Sciences, and his longtime collaborator Kerstin Fischer, professor of language and technology interaction at the University of Southern Denmark.

“It’s puzzling how socially people respond to things that are actually machines,” Fischer said. “There’s lots of emotionality and sociality in interacting with a robot. How is it that these machines can be dealt with as if they were living people?”

Clark and Fischer argue that people interpret social robots, which are designed to interact with humans, as depictions of characters—similar to puppets, stage actors, and ventriloquist dummies.

Their view is controversial. Clark and Fischer’s paper appeared recently in the journal Behavioral and Brain Sciences alongside open peer commentary, in which dozens of researchers in multiple disciplines from around the world reacted to their conclusions.

The discussion matters in a world where humans encounter robots more and more often, and where those robots are growing more capable. Understanding how and why people interact socially with robots could guide the design of future robots and shape how we interpret people's responses to them.

The basics of the depiction model

A person viewing Michelangelo’s statue of David knows it’s a chunk of carved marble. But the viewer simultaneously understands it as a depiction of the biblical character preparing for the battle against Goliath.

In the same way, Clark and Fischer said, people are aware that social robots are made of wires and sensors shaped into a depiction of a character like a little dinosaur, a pet dog, or a human caretaker or tutor. But when people interact with these robots, most are willing to treat them as the characters they depict.

“We understand what an image is, we understand what a drawing is, we understand what a movie is, and therefore we understand what a robot is, because we construct the robot’s character in exactly the same way we construct the characters we see depicted in a drawing or movie,” Fischer said.

People also recognize that the characters are specifically designed to interact with humans, Clark said.

“People do understand that these robots are ultimately the responsibility of the people who designed them and are working them,” he said.

This knowledge comes into play when something goes wrong, such as a robot sharing bad information or injuring someone. People don't hold the robot responsible; they blame the owner or operator, underscoring their grasp of the robot as both an object and a character.

Another view from a Stanford colleague

One of the commentaries that expands on the depiction model comes from another Stanford researcher, Byron Reeves, the Paul C. Edwards Professor of Communication in the School of Humanities and Sciences, who studies how people psychologically process media characters and avatars, including robots.

Reeves argues that while people sometimes treat robots as depictions, they can also have quick natural responses to robots, with thought coming later—the same way you might jump in fright when a dinosaur appears on screen in a movie, and then remind yourself it isn’t real.

“It’s the really fast-thinking stuff. I mean, milliseconds fast,” Reeves said. “Now, in fairness, (Clark) thinks that his depiction model applies to those quick responses as well. I don’t see a good fit with their main concepts. Depiction emphasizes words like ‘appreciation’ and ‘interpretation’ and ‘imagination,’ and they just seem slower, more thoughtful. They’re kind of literary responses: ‘I’ll actively pretend this is real because that will be entertaining.’ ”

Clark and Fischer note in their response to the commentaries that people’s immersion in the story world of a novel, for instance, “is continuous; they don’t have to re-immerse themselves with each new sentence or paragraph. The same is true with social robots. People don’t need extra ‘time and effort’ for ‘reflection’ at each new step of their interaction with a robot.”

They argue that understanding depictions is immediate and fast, and that even very young children grasp them.

“I have a granddaughter who is now six, but when she was one and a half or two, she was already able to take dolls and treat them as characters,” Clark said.

Reeves said his model is more likely to predict how social robotics technology will progress in the future.

“The dinosaurs in movies are better and better, and juicier and juicier, and scarier and scarier,” he said. “I think robots will go there as well.”

Lessons for designers and interactors

While humans may treat social robots like real people or animals, the technology is a long way from replicating actual human interaction, Clark and Fischer said.

“It takes real skill for people to communicate effectively, even with simple things like spatial descriptions,” Clark said. “People know precisely how to combine descriptions, gestures, eye gaze, and mutual attention in telling people where things are. Well, to get robots to be equally skillful—even on a simple thing like that—will be really, really hard.”

Even advanced social robots are extremely limited. But when people interpret them as characters, they're prone to overestimate the robots' capabilities.

“If you have a robot math tutor, you still cannot leave your kid alone with the robot. Why? Because it won’t notice when the child is choking or climbing the balcony or doing something else,” Fischer said.

This type of overestimation also causes problems with other popular but limited technologies, such as voice assistants and AI chatbots. People who design robots and similar technologies should make the constraints more transparent to users, Clark said.

Clark and Fischer said their model not only recognizes the level of work that goes into designing social robots, but also encourages a positive view of the people who interact with them. Under the depiction model, a person who treats the little dinosaur robot like a pet is behaving normally.

“Our model shows respect for the people who interact with the robots in social ways,” Fischer said. “We don’t need to assume they are lonely or irrational or confused, or deficient in any way.”

This article was originally published on Stanford News.
