We all face difficult moral choices that benefit some people over others. Do we give money to a homeless person we pass on the street or save it for a homeless shelter? Should we support regulations that reduce carbon emissions and improve the environment for future generations, even if that might hurt some people’s livelihoods today?
These decisions can often be influenced by psychological biases. For instance, we tend to show more kindness toward people who we see as part of our “tribe,” which can make us act unfairly toward members of other groups. Or we might support policies that we think will be good for us, even if they won’t be good for many other people.
How can we make better, less biased decisions? A new study points to one solution: a mental trick called a “veil of ignorance.”
Philosophers have long argued that when we approach tough moral decisions, we should imagine putting ourselves behind a veil of ignorance, meaning that we don’t know our lot in life—so we don’t know whether we’ll actually benefit from a decision or not. Their assumption is that operating from behind a veil of ignorance will guide us to decisions that are more fair and truly benefit the greater good.
The new research, published in the journal PNAS, puts that theory to the test. In a series of seven experiments, researchers asked a diverse group of participants to ponder moral dilemmas and observed how inducing a veil of ignorance (VOI) affected their thinking.
In one experiment, for example, they asked participants to imagine a hospital with limited supplies of oxygen, where removing one patient from oxygen would save the lives of nine earthquake victims. Half the participants were prompted to use VOI thinking: They imagined that they could be any of the 10 people involved in this scenario—meaning, they had a 1 in 10 chance of being the current patient on oxygen or a 9 in 10 chance of being one of the earthquake victims. The other half of the participants were not given this prompting.
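The impartial arithmetic behind that prompt can be made explicit. The short sketch below (my own illustration, not code from the study; the function name is hypothetical) computes the chance that a randomly assigned person in the scenario survives under each choice:

```python
# Illustrative sketch of the veil-of-ignorance arithmetic in the oxygen
# scenario: 10 people total, one currently on oxygen, nine earthquake
# victims who will die without it. Numbers are from the article.

def survival_chance(remove_patient: bool, n_people: int = 10) -> float:
    """Chance that a randomly assigned person survives under each choice."""
    if remove_patient:
        # The nine earthquake victims survive; the current patient does not.
        survivors = n_people - 1
    else:
        # Only the current patient survives.
        survivors = 1
    return survivors / n_people

print(survival_chance(remove_patient=False))  # 0.1
print(survival_chance(remove_patient=True))   # 0.9
```

From behind the veil, not knowing which of the ten people you are, removing the patient raises your own survival odds from 1 in 10 to 9 in 10.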
Then the participants indicated whether, if they were in charge, they would take the current patient off oxygen to save the others, and rated how moral or immoral that decision would be. Those prompted with VOI thinking chose to take the patient off oxygen significantly more often than those not prompted, and they felt their choice was more morally sound, as well.
Study co-author Joshua Greene says he finds this result encouraging. Assuming that the moral decision is the one that benefits the greatest number of people, it implies that people can overcome their natural aversion and do the right thing even when it makes them uncomfortable.
“If we think that it’s good when people make choices that promote the greater good, then [the veil of ignorance] is interesting because it seems to push people in that direction,” says Greene, a professor of psychology at Harvard University.
To further test this idea in a situation with real-world consequences, Greene and his colleagues asked American participants to choose between two charities: one in India, where $200 would cure two people of blindness, and one in the United States, where $200 would cure one person of blindness. The researchers said they were going to select one of the participants at random and have their decision actually determine where a real $200 would go.
People prompted to use VOI thinking—meaning that they had to imagine that they could be Indian or American—chose to give to the Indian charity much more frequently than those who didn’t use VOI thinking. This suggests that people using VOI thinking will be less likely to automatically favor someone similar to themselves—e.g., fellow Americans—and more likely to make decisions that ultimately benefit more people.
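The comparison the participants faced can be sketched in the same spirit. The fragment below is my own illustration of the dollars-per-person-cured arithmetic, using the figures the article reports; the variable names and the per-dollar framing are assumptions, not the study’s exact prompt:

```python
# Illustrative comparison of the two charities, using the article's figures:
# $200 cures two people of blindness in India, or one person in the US.
cost = 200
cured_india = 2  # people cured per $200 donation in India
cured_us = 1     # people cured per $200 donation in the US

people_helped_per_dollar = {
    "India": cured_india / cost,
    "US": cured_us / cost,
}

# The same donation helps twice as many people if it goes to India.
assert people_helped_per_dollar["India"] == 2 * people_helped_per_dollar["US"]
```

Put another way: if you imagined an equal chance of being any one of the three people who could be cured, giving to the Indian charity would double your own odds of being helped.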
“People are naturally inclined towards those who are closer to them—literally or socially. That’s what makes them more likely to give to that person,” says Greene. “But thinking about the question in this way gives greater weight to concerns for impartiality.”
Further testing revealed that introducing VOI thinking wasn’t just unconsciously priming people or manipulating them, nor were people simply relying on mathematical probabilities when making choices. Instead, Greene believes, the abstract reasoning prompted by the thought experiment helped participants to overcome biases—including our tendency to be more empathic toward people we see as like us—that might otherwise get in the way of fairness. Greene believes this may be a promising way to encourage the greater good, because this basic intellectual exercise might be easier for people than actually expanding their circle of care—though the outcomes might be the same.
“The intervention doesn’t require any warm fellow feeling for humanity—it’s just asking the question, ‘What would I want under this assumption of equal probability?’” Greene says.
Interestingly, people’s choices didn’t differ based on their race, gender, or other characteristics, says Greene. This makes him optimistic that the technique could help a lot of people let go of their individual biases when trying to make moral choices.
Encouraging VOI thinking could have real-world consequences and translate into better decision making—not just for individuals but for groups, he says. If encouraged to look beyond their biases, policymakers might create better policies, or be able to convince disapproving constituents to consider the greater good: to see policies that help more people as fairer and more moral.
“People in our study are doing this privately by themselves,” says Greene. “But my hope is that this would be even more powerful in a group context.”