Yesterday Greater Good editor-in-chief Jason Marsh reported on some of the highlights from Saturday’s Being Human conference in San Francisco. Throughout this week, we’ll continue to highlight topics and speakers featured at the conference. Today we present an interview with Yale psychologist Laurie Santos, whose Being Human talk covered “The Evolution of Irrationality.”
“If we really want to know about being human, it might actually make sense to figure out what it’s like to be a monkey.”
So said Laurie Santos, an associate professor of psychology at Yale, toward the start of her presentation at the Being Human conference this past weekend.
Santos practices what she preaches: In her role as the director of the Comparative Cognition Laboratory at Yale, she has run a series of studies that try to get inside the heads of monkeys. Some of her research has explored the roots of irrational decision making. For instance, are monkeys, like humans, so loss-averse that they try to avoid losing a small amount of money even if it means risking an even greater sum? (They are.)
But her work has also looked at whether non-human primates demonstrate kind, helpful—or “pro-social”—behavior in ways similar to humans. There she has found evidence that capuchin monkeys are willing to share food with a fellow monkey, even when their generosity comes at a personal cost (although prior research has found that chimpanzees aren’t so kind).
“Many of the things we’re proudest of—what makes us human—are also found in other primates,” she told the crowd.
Santos recently received the American Psychological Association’s Distinguished Scientific Award for Early Career Contributions to Psychology and has been hailed as one of Popular Science magazine’s “Brilliant 10” Young Minds. She sat down with me after Being Human to discuss what she hopes to learn by getting inside of monkeys’ minds. Below is a condensed version of our conversation.
Jason Marsh: So you look at the evolutionary roots of pro-social behavior in primates, and today you discussed the evolutionary roots of decision making. What do you see as the nexus between the two?
Laurie Santos: We don’t know for sure, but we’re getting hints that some of the mechanisms that we see in decision making might actually be there for social comparisons and social judgments. They’re actually very good in the context of figuring out what’s fair and what’s unfair: You need reference points, rather than thinking in terms of absolutes, to figure out, How much should I groom you today? Well, I want to groom you as much as you groomed me, right?
We also see mechanisms for things like loss aversion: You need some kind of capacity to get upset if you’re not getting as much as you should be getting, right?
I think there’s this interesting connection that folks haven’t made yet, that some of these decision-making biases we’re tapping into might be there for a much richer social purpose.
JM: So do you see moral decision making and pro-social behavior just resulting from some sort of grand calculus that primates make, based on what they’ve received, what they’ve given, and what they expect to get in return?
LS: I guess it depends on what you mean. My guess is that we’re probably wired a lot to think about other individuals’ preferences and other individuals’ goals. And at least some primates seem to share a very human-like motivation to do something nice for others, at least at no cost to themselves, or to care about equal distributions.
The questions we’re following up on now are where humans are different—there seem to be some obvious spots. It looks like humans might be one of the few species that care, in the context of fairness, not just about “disadvantageous inequity aversion”—cases where you’re getting less—but also “advantageous inequity aversion,” cases where you get more. So far, there’s no evidence of any nonhuman animal worrying about cases where in some mutual exchange, they’re getting more than somebody else. Whereas of course there’s tons of cases in the human domain, like with Occupy Wall Street—some of the one percent might be out there rallying with the other 99 percent.
So that’s one context. The other context is pro-social behavior. Most of the pro-social results we’ve seen so far tend to be in cases where there’s no cost to the animal—all things being equal, will you be nice to someone else? The amazing thing is, even in that context, we see some primates who choose not to be nice—chimpanzees, for example, are indifferent to helping even when it’s at no cost.
JM: When we don’t see evidence of pro-social behavior in chimpanzees or other non-human primates, what does that suggest to you about the nature of pro-social behavior in humans?
LS: The honest answer, I think, is: We’re not sure yet. I think in comparative work, we often perhaps naively expect a rather simple pattern: We’ll look at other species, and they’ll either be like us or not. And if they are like us, then maybe all of them will be like us, or maybe chimpanzees and bonobos will be like us and other primates, not so much.
We rarely see these patterns. And the case of pro-social behavior has been one that’s really hard to reconcile for researchers because it doesn’t seem like the pattern we get is predicted by relatedness to humans, it doesn’t seem to be predicted by different social or mating systems, it doesn’t seem to be predicted by the relatedness of the subjects or the difficulty of the task. It just seems to be really idiosyncratic, which is a hard pattern for us to interpret for what it means for humans.
But I think one thing you can say is that at least in some species, we see rudiments of the kind of cognitive capacities you need to build a pro-social creature. And I think those rudiments we see even as early as New World monkeys, or maybe even as far across as canines. That suggests that some of the capacities you need to be a pro-social creature evolutionarily are pretty old.
JM: So based on your work and other people’s work, to what extent do you believe we humans have strong propensities for altruism and pro-social behavior?
LS: It would be hard to deny, right? One of the new things coming out is work using these cool, cognitive science techniques to try to get at this question: Is altruism the natural process, or is altruism a very “top-down” thing, where I’m inhibiting my selfish urges?
I think the surprising message from some of that work is that it’s not very top-down. In fact, if you give people extra time, or you give them instructions, you end up seeing less altruistic behavior. David Rand, who’s a post-doc at Harvard, has done new work showing that when you make people go quickly, you get more altruistic kinds of donations.
Basically, it doesn’t seem like people have selfish urges and that’s their natural tendency—people aren’t like, “Oh, I’m going to clamp down on those selfish urges and be a nice guy for reputation.” It seems like the instinct is to be nice, to be pro-social, to punish when fairness norms are violated even if not for yourself, and it takes work to be like, “No I’m going to be an economist now and be a jerk.”
JM: Why does that matter? For a teacher or a parent, why does it matter whether altruism is something that seems to be more instinctive or something that is learned?
LS: I think, ultimately, if we want to promote that behavior, we have to know where it comes from. And we promote it differently if we know that’s going to be your instinctual tendency that you’re going to do fast and as soon as you see some stimulus that causes you to want to be altruistic—versus, that’s the kind of thing we can cultivate slowly over time, but we constantly have to worry about selfish, underlying urges in there.
My guess is that if you want to design policies that promote these sorts of behaviors, it’s helpful to know what the initial state is. And it should make us worried that what we do with lots of time and experience is to make people less egalitarian and more selfish and economist-like.