How to Close the Gap Between Us and Them
By Jill Suttie | November 7, 2013
A Q&A with Moral Tribes author Joshua Greene about emotion, reason, and conflict.
How can one group of people be convinced that abortion is morally wrong, while another sees abortion as a woman’s right to choose? Why do Republicans tend to favor the death penalty as morally just, while many Democrats find it morally repugnant? And why do we keep fighting about these and other “moral” issues?
To help answer that, I spoke with Joshua Greene, the John and Ruth Hazel Associate Professor of the Social Sciences and director of the Moral Cognition Lab at Harvard University. He and his colleagues study the psychological processes and neural systems that are involved in making moral choices.
Greene’s new book, Moral Tribes, describes this research, along with other insights into how psychology shapes our moral thinking. The book not only helps explain why we humans sometimes find ourselves at odds over moral issues but also suggests how we can use that knowledge to transcend moral conflicts and find solutions to problems that plague our nation and the world.
Greene was recently in Berkeley to discuss his findings at the Society of Experimental Social Psychology conference, where I caught up with him.
Jill Suttie: In your book, you talked about moral decisions that are made within “tribes” as being different from those made between tribes. How so?
Joshua Greene: The fundamental moral problem is one of cooperation, which is getting a pair or a group of people to do what’s best for the group as opposed to what is best for the individual.
An illustration of this, made famous by the ecologist Garrett Hardin, is a tribe of herders who raise sheep on a common pasture. The herders ask themselves, Should I add another animal to my herd?
Well, it makes sense from a selfish perspective for everyone to grow their herd. But if everybody does that, suddenly there are more animals than the pasture can support, and all of them die. That’s the “tragedy of the commons”—individually rational behavior leading to collective ruin.
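Hardin’s commons has a simple game-theoretic structure, and the incentive Greene describes can be sketched in a few lines of code. The numbers and the linear decline in pasture quality below are illustrative assumptions, not figures from the interview:

```python
# A minimal sketch of Hardin's tragedy of the commons.
# Assumed model: each herder's payoff is (animals owned) * (pasture quality),
# and quality degrades linearly once the total herd exceeds capacity.

CAPACITY = 100  # assumed carrying capacity of the shared pasture
HERDERS = 10    # ten herders, each "entitled" to a tenth of the pasture

def quality(total_animals):
    """Pasture quality falls linearly once the commons is overgrazed."""
    return max(0.0, 1.0 - max(0, total_animals - CAPACITY) / CAPACITY)

def payoff(my_animals, others_total):
    """One herder's payoff, given everyone else's total herd size."""
    return my_animals * quality(my_animals + others_total)

# Cooperative outcome: everyone keeps to their share (10 animals each).
cooperative = payoff(10, 90)        # quality is 1.0, payoff is 10

# One defector doubles their herd while everyone else shows restraint:
# the pasture degrades only slightly, so defection pays for the defector.
lone_defector = payoff(20, 90)

# ...but if every herder reasons the same way, the pasture collapses
# and every herder's payoff falls to zero.
everyone_defects = payoff(20, 180)
```

The point the toy model makes concrete is that `lone_defector > cooperative` for any single herder, yet `everyone_defects` is worse than `cooperative` for all of them, which is exactly why an agreed-upon limit (morality, in Greene’s framing) is needed.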
What’s the solution? Morality. We agree that we’re going to limit our individual herds for the greater good.
This is the type of problem—a Me versus Us problem—that I think humans evolved to solve. We have a suite of emotional capacities that enable us to do that: We have positive emotions that make us want to be cooperative, to care about others’ wellbeing. We have negative feelings that make us want to be cooperative, too—I would feel ashamed or guilty if I were to take too much of the commons for myself. Then we have positive feelings, such as gratitude or admiration, that motivate us to reward others for being cooperative, and negative feelings, such as anger and contempt, that do the same thing. Those social emotions enable us to get along with other people.
But modern moral controversies aren’t about being selfish versus doing good for people in this straightforward way. They’re more complicated.
JS: So how are modern moral controversies different from the “tragedy of the commons”?
JG: To start, imagine a group of herders who are pure communists—they share their pasture and they share their herd, and that’s how they solve the problem: They have everything in common. Now imagine another group of herders who are pure individualists. They say, ‘We’re not going to have a shared pasture. We’re going to divide it up—privatize the pasture—so that we each get our own land and our own herd. And we’re going to respect each other’s property rights.’
These are two different ways of being cooperative—cooperation on different terms. A lot of our political disputes are about individualism versus collectivism: To what extent are we each responsible for ourselves, and to what extent are we all in this together? We see this, for example, in issues such as the health care debate and climate change. The modern moral tragedy is not a simple problem of selfishness versus morality—Me versus Us. It’s different tribes with different moral ideals occupying the same space. It’s Us versus Them—their values versus our values, or their interests versus our interests.
The problem is even more complicated because groups not only have different ideas about how to cooperate; they have different histories, religions, leaders, heroes, and holy books that tell them what’s right. This exacerbates the problem of Us versus Them. Different groups rally around different moral authorities, different “proper nouns” such as the Christian Bible versus the Koran.
So one of the main ideas of the book is that when it comes to everyday morality—being selfish versus being good to other people—your moral intuitions are likely to serve you well. Our moral emotions evolved to solve the Me versus Us problem, the tragedy of the commons. But when it comes to Us versus Them, what I call the “tragedy of commonsense morality,” then our gut reactions are the problem. And that’s when we need to stop and think and be more reflective.
JS: How can we disengage and reflect when faced with moral dilemmas like the ones that exist between groups?
JG: An important tool is just awareness—understanding that your judging one way rather than another is itself a gut reaction, and that the people on the other side have different gut reactions, too.
But awareness isn’t enough. You’ve got your gut reactions and I’ve got mine—but what should we do? What we need is what I call a “meta-morality.” A morality is what allows the individuals within a group to get along, to turn a bunch of separate “Me”s into an Us. A meta-morality, then, performs the same function at a higher level, allowing groups to get along. A meta-morality adjudicates among competing moral systems, just as a first-order moral system adjudicates among competing individuals.
The meta-morality that I favor has historically been known as “utilitarianism,” but that’s a very bad name for it. I prefer to call it “deep pragmatism,” a name that gives a clearer sense of what it’s really about. Deep pragmatism boils down to this: Maximize happiness impartially. Try to make life as happy as possible overall, giving equal weight to everyone’s happiness.
It’s a meta-morality, because it’s a system. Unlike simple rules such as “don’t kill people,” deep pragmatism tells you how to make trade-offs, which is what a meta-morality needs to do. For example, suppose there is a conflict between the individual right to free speech and the rights of other people not to be harmed or offended. A deep pragmatist asks: What are the long-term consequences of allowing this kind of speech? What happens if we restrict it? Which option is most likely to lead to the best results?
JS: Has deep pragmatism ever been applied in the world?
JG: In a sense, this is the dominant framework among policy wonks—trying to estimate costs and benefits. But adding up costs and benefits is, ironically, not necessarily the decision procedure that is likely to produce the best results in all cases.
For one thing, we’re likely to be biased. Imagine standing in a store trying to estimate the costs and benefits of shoplifting. You’re better off just following the commonsense moral rule. But that higher-level judgment—the judgment about how to decide—is itself a kind of pragmatic decision.
So part of it is paying attention to costs and benefits, and part of it is knowing when to just go with the simple rule. I think that the people who do this well are the people whom we describe as “principled, but practical.”
JS: It seems that in our society, leaders who appear ambivalent about moral decisions—who take in more information before deciding—are seen as weak rather than morally strong. How can things change if this is the dominant view?
JG: Change has to come from the bottom up. It won’t work to have a bunch of utilitarian policy wonks running things while people’s gut reactions are out of line with what the wonks are doing.
The key, then, is to change the way ordinary people think, and that requires a deeper, scientific understanding of our own minds—“Where are my judgments coming from?” It begins with the science of psychology. We’ve learned that our judgments can be very fickle, sensitive to things that, upon reflection, seem irrelevant—such as the physical distance between ourselves and people we can help—and insensitive to things that are very important, such as the number of people we can help.
JS: So is it always better to use more thoughtful reflection instead of gut reactions when making moral decisions?
JG: My metaphor for thinking about gut reactions and cognitive processes is the two modes we have for taking digital photos. If you’re doing something pretty standard, like taking a picture of a mountain from a mile away in broad daylight, then you can use one of the automatic settings—“landscape mode”—and it will likely turn out well.
But if you want to do something that the designers of your camera did not envision, you need flexibility. You put the camera in manual mode, and you can adjust everything yourself and do exactly what you want.
We can ask, ‘Which is better—the manual mode or the automatic settings?’ And the answer is that neither is better in any absolute sense. Automatic settings are better for most purposes—they are very efficient. But when you’re facing a more challenging problem, then manual mode is better. That’s when you need flexibility, rather than efficiency.
In the same way, the human brain has automatic settings and a manual mode. The automatic settings are our gut reactions, and our manual mode is our ability to stop and think and reason—especially about costs and benefits.
When you’re facing the moral problems of everyday life—“Should I do the thing I agreed to do, even though it’s now no longer convenient?”—there your gut reactions are more likely to be a good guide than rational calculation. But when you’re trying to decide what our policy should be about the death penalty, abortion, international conflicts, global warming—those are not the kinds of problems that our tribal gut reactions were designed to solve. Here we need to step back from our feelings and look at the evidence to figure out what is likely to produce the best results.
JS: We seem so clearly divided on these important questions, so much so that the sides can hardly talk to each other. What do you suggest?
JG: My book gets into a lot of abstract philosophy and a lot of technical neuroscience, so I deliberately ended the book with commonsense guidelines for dealing with real-world problems.
The first rule is that, when it comes to controversial moral issues, you should consult your gut feelings, but you shouldn’t trust them too much. When we have strong emotional disagreements, someone’s gut reactions have to be wrong, and maybe everyone’s are wrong.
An extension of this idea—and a more controversial one—is that we’re unlikely to settle our disagreements by arguing about rights. We talk about rights to make our gut reactions sound more rational. Whatever we feel, we can posit the existence of a right that corresponds to our feelings. So if I feel that it’s wrong to kill a fetus, I say it has a right to live. If I feel that it’s wrong to tell a woman that she can’t terminate her pregnancy, I say she has a right to choose. We have no procedure for figuring out who has which rights or which rights count more. The alternative approach is to focus instead on costs and benefits, and on the evidence concerning them.
A second rule: Watch out for what I call “biased fairness.” Fairness comes in different forms. For example, paying everyone the same could be fair, but so could giving bonuses for better performance. “Biased fairness” means favoring the version of fairness that suits your selfish interests. It’s not a coincidence that most wealthy people tend to think taxes should be lower, or that people with lower incomes think it’s OK to have higher taxes to pay for more social services. Very rarely do people just come out and say, “I don’t care about other people; I’m just out for me.” Instead we choose the version of fairness that suits us best.
Another key idea is using common currency. If we’re not going to be talking about rights, because rights are really just dressing up our gut reactions, what’s our common currency? We need a common currency of facts, and we need a common currency of values. The currency of facts is science, broadly construed—the search for observable, replicable evidence. It’s true that people tend to reject science if it conflicts with their worldview. But everybody appeals to science when it suits them, and no other source of knowledge has that distinction. Creationists would be delighted if, tomorrow, credible scientists were to declare that we’d got it all wrong and that the world is in fact just a few thousand years old. But biologists and geologists don’t appeal to the Pope when he happens to agree with their views. Science is our common ground.
When it comes to values, that’s really where deep pragmatism comes in. Believe what you want, value what you want, but the only way we can systematically make trade-offs, I think, is to appeal to consequences, giving equal weight to everyone’s interests. Some philosophers think there are other ways, but I don’t think they work.
JS: How might you apply this to a real-world dilemma?
JG: Relying on our gut reactions can be very counterproductive. Look at the state of our prisons—horrible, miserable places. You commit a crime, you end up there, you spend all of your time with other criminals. You’re not treated well by the authorities, you’re basically living in a “might makes right” kind of jungle, where justice is often quite arbitrary, sexual violence is rampant, and prisoners feel like they have no control. We send people to prison for 20 years for doing something bad; then when they come out, they’re completely unprepared to do anything productive. For years, all they’ve known is a world of criminals and unsympathetic abuse from authorities above them. This kind of prison system satisfies our taste for retribution—our desire to really stick it to people who break the rules. But in terms of actual results, it’s counterproductive.
Our criminal justice system is very different from most others in the developed world. It’s gut reactions run rampant, as opposed to thinking about what is actually going to produce the best results. Obviously, it’s a very complicated policy question, and I wouldn’t say that we should necessarily be “nice” to people who commit serious crimes. But I think that if we focused on achieving good outcomes, rather than satisfying our punitive impulses—what we exalt as “Justice”—we’d all be better off.
JS: What else do you find exciting about your research?
JG: I believe that we’re building something unprecedented in the natural world. Biological evolution is a competitive process. Our basic moral instincts toward other people evolved—not because they are “nice” but because being good to those in your group allows your group to out-compete other groups. And so our goodness evolved as a competitive weapon.
But the amazing thing is that we also have an ability to understand our own thinking. This general ability to understand things didn’t evolve as part of morality. It evolved just to help us solve problems in general. So, combining some basic moral impulses that evolved as a competitive weapon with this capacity to think and reason and understand creates something totally new. We can step outside of our tribal instincts and say, “It’s not just the people in my tribe who matter; everybody matters. And everybody matters equally.” It’s a thought that evolution never wanted us to have.
And it’s not just another biological oddity, like, “Oh, here’s an animal that’s got green spots—gee, I’ve never seen that before.” Our evolving global tribe is unlike anything that’s ever evolved before, because it’s not evolving for a competitive purpose. It’s evolving simply because we think it’s good. The idea that our species is breaking free of nature’s ruthless rule—that’s pretty exciting to me.
About The Author
Jill Suttie, Psy.D., is Greater Good’s book review editor and a frequent contributor to the magazine.