Should We Trust Positive Psychology?
By Robert Biswas-Diener | October 7, 2015
Studies of human strengths are not being replicated. Does the field face a crisis—or an opportunity?
Does thinking about the elderly cause young people to act more like old folks themselves?
One 1996 study said yes. Psychologist John Bargh and his colleagues asked one group of participants to complete a word search puzzle in which hidden words—like “stubborn,” “alone,” and “Florida”—were all related to stereotypes of the elderly.
After these participants completed the puzzle, the researchers found that they walked more slowly than people from a control group. It is amazing to think that it might be possible to subconsciously “prime” behaviors simply by exposing people to an idea—and Bargh’s study was widely covered by the media and cited by other researchers.
Unfortunately, the results weren’t replicated. Two attempts to re-create that original study met with no success. In fact, in one of the later studies, participants exposed to the same words walked faster!
Recently, a team of 270 psychologists, led by Brian Nosek at the University of Virginia, tried to reproduce 100 experiments to see if their findings would hold up. The result of the three-year initiative is a public-relations nightmare for social and cognitive psychology: The team got the same results in only about one-third of the studies.
This might be especially troubling for positive psychology, the scientific study of factors that allow individuals and communities to flourish. Heavy commercialization of this relatively new field—and the fact that it is an applied science—leads to a consumer economy of blogs, books, and seminars that favors single-study results. I am, admittedly, a part of this process, and so is the nonprofit publication Greater Good, in which these words appear.
In this context, replication is especially important—and the field’s replication crisis presents us with a great opportunity to explain why. It is also a reminder that science always needs to take the long view. Consumers increasingly want comprehensive theories of happiness and finalized positivity ratios—when we are, at best, a few decades old as a science. So where does that leave us? Should the average person and practitioner just ignore positive psychology until it grows up?
My answer is no, in part because we can learn a great deal from the exploration itself.
The replication crisis
Many people assume that non-replication is a terrible state of affairs. At first glance, it appears to undermine scientific authority or—even worse—be suggestive of outright chicanery. Not so. Replication (and, by extension, non-replication) is simply part of the iterative scientific process. One researcher conducts a study and, later, a second researcher conducts a similar study to evaluate the accuracy of the first.
The failure of a later study to corroborate the findings of an earlier one could be due to a variety of factors. The most obvious is that the initial finding was the product of chance, sampling bias, or weak methodology. In this case, the non-replication is useful for alerting us all to the problem so that we can appropriately dismiss the original results. Journals and editors that support replication are offering a critical tool for the rest of us: studies can be sewn together to create a patchwork of knowledge in which we can be confident.
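The point about chance findings can be made concrete with a small simulation. The sketch below is hypothetical (not drawn from any study in this article): it assumes an effect that is actually zero, has many simulated "labs" test it with small samples, flags the results that look significant by chance alone, and then checks how often a fresh replication of those chance hits succeeds.

```python
import random
import statistics

random.seed(42)  # fixed seed so the simulation is repeatable

def study(n, true_effect=0.0):
    """Run one simulated study: n participants, normally distributed
    outcomes with the given true effect size (in SD units). Returns
    the observed mean effect."""
    return statistics.mean(random.gauss(true_effect, 1.0) for _ in range(n))

# For n = 20, the standard error of the mean is 1/sqrt(20) ~ 0.224,
# so an observed mean above ~1.645 * 0.224 ~ 0.37 roughly corresponds
# to a one-sided p < .05 -- it looks "significant".
N, THRESHOLD = 20, 0.37

# 1,000 labs each study an effect that is actually zero.
originals = [study(N) for _ in range(1000)]
significant = [m for m in originals if m > THRESHOLD]

# Each chance "discovery" is replicated once with a fresh sample.
replicated = sum(1 for _ in significant if study(N) > THRESHOLD)

print(f"chance 'discoveries': {len(significant)} of 1000")
print(f"of those, replicated: {replicated}")
```

Roughly 5 percent of the null studies come up "significant" by luck, and only about 5 percent of *those* survive a replication attempt, which is exactly why a second study is such a useful filter.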
It is also possible, however, that non-replication simply indicates that the original findings don’t extend to a different sample, even though the core concept may still be true. Discovering that gratitude boosts happiness among some people, for example, does not necessarily mean that it can be expected to do the same for all people.
In this sense, non-replication is less about dismissing errant findings than it is about charting the boundaries of generalizability: does a particular finding, or intervention, hold equally true for all groups across all time?
Positive psychology needs replication
This is where the replication crisis and positive psychology collide. Positive psychology is a young science and, to a large degree, has turned research attention on areas that have historically been the domain of philosophers and theologians. We often grapple with broad and fuzzy-sounding topics like happiness and hope. Even so, studying internal states is what psychologists do—and positive psychological science must be held to the same standards of rigor as the larger field.
Unfortunately, its track record isn’t great. Uli Schimmack, a researcher at the University of Toronto, has created the “R-index,” a statistical tool for estimating the replicability of studies, of researchers, and of journals. Schimmack describes the index as a “doping test” for psychological science. Distressingly, The Journal of Positive Psychology received a letter grade evaluation of “C.” Further, Schimmack found that the R-index for this journal has remained fairly constant across time.
Time and experience might help. The first piece of research I ever published was a study of the happiness of some of the poorest citizens of Kolkata, India. I discovered that homeless people were somewhat dissatisfied with their lives but that slum dwellers reported being mildly satisfied with many aspects of life, such as their families. I am proud of the fact that I was able to collect data from a difficult-to-reach group often overlooked by researchers. That said, my findings could easily be the product of a chance sample.
This is why, years later, I collected new data from a matched sample of homeless people. In this follow-up study I found that they reported very mild satisfaction with their lives. The second study helps us feel more confident in the findings, but it still tells us little about the potential happiness of poor people in other parts of the world, or even in other parts of India.
Consider a classic positive psychology intervention study in this context. The benefits of tracking gratitude are widely documented. It has been linked in the research to increased happiness and decreased depression.
However, follow-up studies are painting a more complex portrait. Logging gratitude (rather than expressing it) might be favorable for some cultural groups. People with needy personality styles might suffer from lower self-esteem as the result of gratitude exercises. Religious people might benefit more from expressing gratitude to God rather than general gratitude.
The tonic benefits of common gratitude exercises, therefore, should be subject to on-going scrutiny so that we can gain a better understanding of when this exercise might be advisable, for whom, and how it should be administered. Replication becomes a critical tool for understanding how interventions are best delivered in local contexts.
Positive psychology needs caution
This is why we—researchers, journalists, and practitioners alike—should be cautious about reporting the results of a single study. It is generally more sophisticated to report on general conclusions, arrived at through a synthesis of the results of multiple studies.
Even so, the allure of the single study is hard to resist, and I myself am guilty of focusing on cool results of sexy studies.
This temptation is especially interesting to consider in the context of positive psychology as an applied science. As such, results are often disseminated through a consumer economy, so that people can put the research to work in improving their lives. This means that the flashiest, least nuanced results are reported. It is easier to say, “Writing down three things for which you are appreciative will boost your happiness,” than it is to say, “Writing down three things for which you are appreciative may boost happiness depending on a range of factors including motivational factors, personality factors, cultural factors, and the presence or absence of mental illness. It could also lower happiness. Or maybe do nothing.”
In an era where blogs and chat rooms are increasingly the vehicles of dissemination, replicated studies become increasingly important. What does this mean, practically, for those of us who report on and use positive psychological science?
First, it means that there is a special responsibility to keep up with new developments in the field. In my opinion, the minimum threshold for this is subscribing to journals and attending conferences—in other words, regularly consuming “primary source” material in science. This can help protect against outdated information (such as the so-called 3:1 positivity ratio) as well as promote a more nuanced understanding of concepts (for example, just because one study of the 3:1 ratio was retracted doesn’t mean that some positivity ratio does not exist).
In addition to being open to new changes—and being aware of them when they occur—it can also be helpful for those reporting on positive psychology to emphasize syntheses rather than highlights from a single piece of research.
Positive psychology needs to be (sometimes) wrong
Finally, I would like to offer a defense of studies that are “wrong.”
We want to create an environment that promotes bold exploration. We do not want positive psychology researchers to have to tiptoe around their studies for fear of non-replication or other forms of external criticism.
Instead, we want to promote an atmosphere where scientists feel comfortable asking tough, counter-intuitive, and even outlandish questions. We want them to test whether a gratitude activity works better than eating ice cream (a true study!) and we want them to ask whether mindfulness could ever be harmful. If later studies provide different results, then we have let science run its course.
Let’s take an example from my own research. In 2005 my colleagues and I published a paper in which we made the claim that most people are happy. To back up our argument we presented data from three far-flung cultural groups: Amish farmers, Maasai tribal people, and members of an Inuit community in Greenland. Overall, I am proud of this study, as it represents data that were very difficult to collect. In fact, gathering them took me six months and a dozen flights.
Here’s the problem: Although the data are good—that is, we didn’t fabricate our findings and we used well-established methods—they are not necessarily “true.” In the case of the Amish, I was only able to interview 52 people living in two communities. While we made every effort to recruit a wide sample of men and women, young and old, it is very possible the findings of this particular sample do not generalize to the Amish more widely. Still, the fact that this sample reported moderate happiness supports our conclusion.
In the end, however, my colleagues and I need to be prepared for the possibility that someday, someone might come along with a far larger and—let’s face it—better sample showing very different results than those we found.
This leaves us—all of us, not just my colleagues and me—with the important reminder that “what we know,” scientifically speaking, might better be thought of as “what we know right now.” Positive psychology is a terrific avenue to explore and understand the best aspects of humanity. It is the dynamic aspect of science—that we can change, replace, and improve our knowledge—that helps make it so.
About The Author
Robert Biswas-Diener, Ph.D., is the author of The Courage Quotient and Happiness. He is widely known as the "Indiana Jones of Positive Psychology" because his research on happiness has taken him to such far-flung places as Greenland, India, and Kenya. Robert is Managing Director at Positive Acorn and lives in Portland, Oregon. He is also the co-author of The Upside of Your Dark Side: Why Being Your Whole Self—Not Just Your "Good" Self—Drives Success and Fulfillment.