In the Summer 2007 issue of Greater Good, I wrote about how scientists and engineers are trying to build machines that can read and reflect human emotions. (Unfortunately, the article is not available online.) In researching the piece, I was fascinated to learn that there is also quite a bit of research into how humans develop emotional attachments to machines, a finding that flies in the face of the common idea that a mechanical world is intrinsically a loveless one. MIT researcher Cory Kidd and Stanford's Cliff Nass, for example, have both found that people can interact warmly with machines and even derive emotional satisfaction from the relationship.

This is not an abstract concern: Roboticists in Japan and Korea are racing to develop androids capable of caring for their nations' growing populations of the elderly and terminally ill, and they're making slow progress. Meanwhile, people all over the world are interacting more and more with machines, and communicating with other people through machines. (In fact, I'm communicating with you right now through my computer and the Internet.)

A new Georgia Tech study looked at how people felt about their robot vacuum cleaners. The Daily Galaxy reports:

The researchers found that people often give the vacuums nicknames, worry about them when they signal for help, and sometimes even dote on them like pets. People remained fond of their robots even when the machines weren't working properly.

"They're more willing to work with a robot that does have issues because they really, really like it," said Beki Grinter, an associate professor at the school's College of Computing. "It sort of begins to address more concerns: If we can design things that are somewhat emotionally engaging, it doesn't have to be as reliable."

Grinter became interested in individuals' relationships with the devices after she saw online pictures of people dressing up their Roombas, the disc-shaped, self-directed vacuums made by iRobot Corp…

Grinter enlisted Ph.D. student Ja Young Sung, who studies "emotional design" – the theory that certain types of design can influence consumers to become emotionally attached – to help figure out what was going on.

First, Sung monitored an online forum devoted to Roombas, where she found owners who named their machines and traveled with them, including one owner who was excited to introduce the machine to his parents.

Others reported how they had "Roomba-ized" their homes so the robot could have an easier time traversing floors. Some bought new flooring, or pre-cleaned the floors to make things easier for their robot friend.

"I was blown away," said Young Sung. "Some Roombas break a lot, they still have functional problems. But people are willing to make that effort because they love their robot enough."

Based on feedback I got on my article, I know that some readers found the prospect of an emotional, empathic android to be grim, and I've read serious criticisms of designing robots to take care of people who can't take care of themselves. It is, in many ways, the ultimate in outsourcing, and it brings to mind this comment by the British writer J.G. Ballard: "I think we are subcontracting our moral universe to that of the machines." Ballard meant that there is something in technology that seems to encourage unreason and moral irresponsibility, even as machines themselves embody reason and reliability; they are, after all, the fruit of centuries of scientific enlightenment.

It's a compelling, alarming notion, but empirical research like the Georgia Tech study seems to indicate that, in fact, we are capable of humanizing even our most inhuman creations and investing them with the gentlest emotions. By definition, an empathic android would be a reflection, not a negation, of our humanity.

Thus, I think, the prospect of emotional machines remains deeply ambiguous, but not automatically negative. Many science-fiction films (e.g., the Terminator and Matrix franchises) have depicted intelligent machines as ruthless, aggressive monsters whose first desire is to take over the world and wipe out humanity, but as our knowledge grows, there are many reasons to think it might go the other way. Intelligent and emotional machines may also share the compassion and altruism that all humans are capable of. I suspect that how we view our machines depends a great deal on how we choose to view humanity: Are we violent and aggressive by nature, or are we better defined by cooperation and caring?
