Debates have raged on social media, around dinner tables, on TV, and in Congress about the science of COVID-19. Is it really worse than the flu? How necessary are lockdowns? Do masks work to prevent infection? What kinds of masks work best? Is the new vaccine safe?
You might see friends, relatives, and coworkers offer competing answers, often brandishing studies or citing individual doctors and scientists to support their positions. With so much disagreement—and with such high stakes—how can we use science to make the best decisions?
Here at Greater Good, we cover research into social and emotional well-being, and we try to help people apply findings to their personal and professional lives. We are well aware that our business is a tricky one.
Summarizing scientific studies and distilling the key insights that people can apply to their lives isn’t difficult only for the obvious reasons, like explaining formal scientific terms or rigorous empirical and analytic methods to non-specialists. Context also gets lost when we translate findings into stories, tips, and tools, especially when we push it all through the nuance-squashing machine of the Internet. Many people never read past the headlines, which are written to be relatable and to catch the interest of as many readers as possible. And because our articles can never be as comprehensive as the original studies, they almost always omit crucial caveats, such as the limitations acknowledged by the researchers. To get those, you need access to the studies themselves.
And it’s very common for findings and scientists to seem to contradict each other. For example, there were many contradictory findings and recommendations about the use of masks, especially at the beginning of the pandemic—though as we’ll discuss, it’s important to understand that a scientific consensus did emerge.
Given the complexities and ambiguities of the scientific endeavor, is it possible for a non-scientist to strike a balance between wholesale dismissal and uncritical belief? Are there red flags to look for when you read about a study on a site like Greater Good or hear about one on a Fox News program? If you do read an original source study, how should you, as a non-scientist, gauge its credibility?
Here are 11 questions you might ask when you read about the latest scientific findings on the pandemic, drawn from our own work here at Greater Good.
1. Did the study appear in a peer-reviewed journal?
In peer review, submitted articles are sent to other experts for detailed critical input that often must be addressed in a revision prior to being accepted and published. This remains one of the best ways we have of ascertaining the rigor of a study and the rationale for its conclusions. Many scientists describe peer review as a truly humbling crucible. If a study didn’t go through this process, for whatever reason, it should be taken with a much bigger grain of salt.
“When thinking about the coronavirus studies, it is important to note that things were happening so fast that in the beginning people were releasing non-peer reviewed, observational studies,” says Dr. Leif Hass, a family medicine doctor and hospitalist at Sutter Health’s Alta Bates Summit Medical Center in Oakland, California. “This is what we typically do as hypothesis-generating but given the crisis, we started acting on them.”
In a confusing, time-pressed, fluid situation like the one COVID-19 presented, people without medical training have often been forced to simply defer to expertise in making individual and collective decisions, turning to culturally vetted institutions like the Centers for Disease Control (CDC). Is that wise? Read on.
2. Who conducted the study, and where did it appear?
“I try to listen to the opinion of people who are deep in the field being addressed and assess their response to the study at hand,” says Hass. “With the mRNA coronavirus vaccines, I heard Paul Offit from UPenn at a UCSF Grand Rounds talk about it. He literally wrote the book on vaccines. He reviewed what we know and gave the vaccine a big thumbs up. I was sold.”
From a scientific perspective, individual expertise and accomplishment matter—but so does institutional affiliation.
Why? Because institutions provide a framework for individual accountability as well as safety guidelines. At UC Berkeley, for example, researchers conducting studies with human subjects during COVID-19 must submit a Human Subjects Proposal Supplement Form and follow a standard protocol and rigorous guidelines. Is this process perfect? No. It’s run by humans, and humans are imperfect. However, the conclusions it produces are far more reliable than opinions offered by someone’s favorite YouTuber.
Recommendations coming from institutions like the CDC should not be accepted uncritically. At the same time, however, all of us—including individuals sporting a “Ph.D.” or “M.D.” after their names—must be humble in the face of them. The CDC represents a formidable concentration of scientific talent and knowledge that dwarfs the perspective of any one individual. In a crisis like COVID-19, we need to defer to that expertise, at least conditionally.
“If we look at social media, things could look frightening,” says Hass. When hundreds of millions of people are vaccinated, millions of them will be afflicted anyway, in the course of life, by conditions like strokes, anaphylaxis, and Bell’s palsy. “We have to have faith that people collecting the data will let us know if we are seeing those things above the baseline rate.”
3. Who was studied, and where?
Animal experiments tell scientists a lot, but their applicability to our daily human lives is often limited. Similarly, if researchers only studied men, the conclusions might not be relevant to women, and vice versa.
Many psychology studies rely on WEIRD (Western, educated, industrialized, rich, and democratic) participants, mainly college students, which creates an in-built bias in the discipline’s conclusions. Historically, biomedical studies have also skewed toward white male participants, which again limits how far their findings generalize. Does that mean you should dismiss Western science? Of course not. It’s just the equivalent of a “Caution,” “Yield,” or “Roadwork Ahead” sign on the road to understanding.
This applies to the coronavirus vaccines now being distributed and administered around the world. The vaccines will have side effects; all medicines do. Those side effects will be worse for some people than others, depending on their genetic inheritance, medical status, age, upbringing, current living conditions, and other factors.
For Hass, it amounts to this question: Will those side effects be worse, on balance, than COVID-19, for most people?
“When I hear that four in 100,000 [vaccine trial participants] had Bell’s palsy, I know that it would have been a heck of a lot worse if 100,000 people had COVID. Three hundred people would have died and many others been stuck with chronic health problems.”
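For readers who like to see the arithmetic, here is a minimal sketch of the comparison Hass is making. The numbers are the illustrative figures from his quote, not official statistics:

```python
# A back-of-the-envelope version of the comparison in the quote above.
# Both figures come from that quote: ~4 Bell's palsy cases per 100,000
# vaccinated trial participants, versus ~300 deaths if 100,000 people
# instead caught COVID-19. These are illustrative, not official, numbers.
bells_palsy_per_100k_vaccinated = 4
covid_deaths_per_100k_infected = 300

ratio = covid_deaths_per_100k_infected / bells_palsy_per_100k_vaccinated
print(f"In this comparison, the death risk from infection is roughly "
      f"{ratio:.0f} times the Bell's palsy risk observed in the trials.")
```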
4. How big was the sample?
In general, the more participants in a study, the more reliable its results tend to be. That said, a large sample is sometimes impossible, or even undesirable, for certain kinds of studies. During COVID-19, limited time has constrained the sample sizes of many studies.
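To see why sample size matters, here is a minimal sketch of how the statistical uncertainty around an observed rate shrinks as more people are studied. The 2 percent event rate is purely hypothetical:

```python
# Approximate 95% margin of error for an observed proportion p in a sample of n people.
# The uncertainty shrinks roughly with the square root of the sample size.
import math

def margin_of_error(p, n, z=1.96):
    """Normal-approximation 95% margin of error for a proportion."""
    return z * math.sqrt(p * (1 - p) / n)

p = 0.02  # hypothetical observed rate of some outcome
for n in (100, 1_000, 40_000):
    print(f"n={n:>6}: {p:.1%} +/- {margin_of_error(p, n):.2%}")
```

With 100 participants the estimate could plausibly be off by a couple of percentage points; with tens of thousands, the uncertainty drops to a fraction of a percent.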
Even so, some studies have been much larger than others—and the sample sizes of the vaccine trials are large enough to give us the information we need to make informed decisions. Doctors and nurses on the front lines of COVID-19—who are now the very first people being injected with the vaccine—think in terms of “biological plausibility,” as Hass says.
Did the admittedly rushed FDA approval of the Pfizer-BioNTech vaccine make sense, given what we already know? Tens of thousands of doctors who have been grappling with COVID-19 are voting with their arms, in effect volunteering to be a sample for their patients. If they didn’t think the vaccine was safe, you can bet they’d resist it. When the vaccine becomes available to ordinary people, we’ll know a lot more about its effects than we do today, thanks to health care providers paving the way.
5. Did the researchers control for key differences, and do those differences apply to you?
Diversity and gender balance aren’t necessarily virtues in experimental research, though ideally a study sample is as representative of the overall population as possible. However, many studies use intentionally homogeneous groups, because this allows the researchers to limit the number of different factors that might affect the result.
While good researchers try to compare apples to apples, and control for as many differences as possible in their analyses, running a study always involves trade-offs between what can be accomplished as a function of study design, and how generalizable the findings can be.
You also need to ask if the specific population studied even applies to you. For example, when one study found that cloth masks didn’t work in “high-risk situations,” it was sometimes used as evidence against mask mandates.
However, a look beyond the headlines revealed that the study was of health care workers treating COVID-19 patients, which is a vastly more dangerous situation than, say, going to the grocery store. Doctors who must intubate patients can end up being splattered with saliva. In that circumstance, one cloth mask won’t cut it; they also need an N95 respirator, a face shield, two layers of gloves, and two layers of gowns. For the rest of us in ordinary life, masks do greatly reduce community spread, provided that as many people as possible wear them.
6. Was there a control group?
One of the first things to look for in methodology is whether the population tested was randomly selected, whether there was a control group, and whether people were randomly assigned to either group without knowing which one they were in. This is especially important if a study aims to suggest that a certain experience or treatment might actually cause a specific outcome, rather than just reporting a correlation between two variables (see next point).
For example, were some people randomly assigned a specific meditation practice while others engaged in a comparable activity or exercise? If the sample is large enough, randomized trials can produce solid conclusions. But sometimes a study will not have a control group because it would be ethically impossible: we can’t, for example, let sick people go untreated just to see what would happen. Biomedical research often uses standard “treatment as usual” or placebos in control groups, and follows careful ethical guidelines to protect patients both from maltreatment and from being deprived of necessary treatment. When you’re reading about studies of masks, social distancing, and treatments during the COVID-19 pandemic, you can partially gauge the reliability and validity of the study by first checking whether it had a control group. If it didn’t, the findings should be taken as preliminary.
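Here is a toy sketch, not modeled on any particular trial, of the key property that randomization buys: characteristics the researchers never measured (here, an arbitrary “baseline risk”) get spread roughly evenly across the groups, so they are unlikely to masquerade as a treatment effect:

```python
# Toy illustration of random assignment. The "baseline_risk" values are
# invented; the point is that randomization balances them across groups.
import random

random.seed(0)
people = [{"baseline_risk": random.uniform(0, 1)} for _ in range(10_000)]

# Randomly assign each person to treatment or control.
for person in people:
    person["group"] = random.choice(["treatment", "control"])

def mean_risk(group):
    risks = [p["baseline_risk"] for p in people if p["group"] == group]
    return sum(risks) / len(risks)

# The two averages come out nearly identical, so any later difference in
# outcomes is unlikely to be driven by this unmeasured factor.
print(f"treatment group baseline risk: {mean_risk('treatment'):.3f}")
print(f"control group baseline risk:   {mean_risk('control'):.3f}")
```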
7. Did the researchers establish causality, correlation, dependence, or some other kind of relationship?
We often hear “Correlation is not causation” shouted as a kind of battle cry to try to discredit a study. But correlation—the degree to which two or more measurements seem connected—is important, and can be a step toward eventually establishing causation—that is, showing that a change in one variable directly triggers a change in another. On its own, however, a correlation cannot tell us the direction of the relationship (does A change B, or does B change A?), nor can it rule out the possibility that a third, unmeasured factor is driving the pattern in both variables.
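A small simulation, with entirely made-up numbers, shows how a hidden third factor can make two variables move together even though neither one influences the other:

```python
# Toy simulation of a spurious correlation: A and B never influence each
# other, but both depend on the same unmeasured "confounder."
import random

random.seed(1)
pairs = []
for _ in range(5_000):
    confounder = random.gauss(0, 1)              # the hidden third factor
    a = 1.5 * confounder + random.gauss(0, 1)    # A depends only on the confounder
    b = 1.5 * confounder + random.gauss(0, 1)    # B depends only on the confounder
    pairs.append((a, b))

def correlation(pairs):
    n = len(pairs)
    mean_a = sum(a for a, _ in pairs) / n
    mean_b = sum(b for _, b in pairs) / n
    cov = sum((a - mean_a) * (b - mean_b) for a, b in pairs) / n
    var_a = sum((a - mean_a) ** 2 for a, _ in pairs) / n
    var_b = sum((b - mean_b) ** 2 for _, b in pairs) / n
    return cov / (var_a ** 0.5 * var_b ** 0.5)

# Prints a correlation of roughly 0.7, even though neither variable causes the other.
print(f"correlation between A and B: {correlation(pairs):.2f}")
```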
In the end, the important thing is to accurately identify the relationship. This has been crucial in understanding steps to counter the spread of COVID-19 like shelter-in-place orders. Just showing that greater compliance with shelter-in-place mandates was associated with lower hospitalization rates is not as conclusive as showing that one community that enacted shelter-in-place mandates had lower hospitalization rates than a different community of similar size and population density that elected not to do so.
We are not the first people to face an infection without understanding the relationships between factors that would lead to more of it. During the bubonic plague, cities would order rodents killed to control infection. They were onto something: Fleas that lived on rodents were indeed responsible. But then human cases would skyrocket.
Why? Because the fleas would migrate off the rodent corpses onto humans, which would worsen infection. Rodent control only reduces bubonic plague if it’s done proactively; once the outbreak starts, killing rats can actually make it worse. Similarly, we can’t jump to conclusions during the COVID-19 pandemic when we see correlations.
8. Are journalists and politicians, or even scientists, overstating the result?
Language that suggests a fact is “proven” by one study, or that promotes one solution for all people, is most likely overstating the case. Sweeping generalizations of any kind often indicate a lack of humility that should be a red flag to readers. A study may very well “suggest” a certain conclusion, but it rarely, if ever, “proves” it.
This is why we use a lot of cautious, hedging language at Greater Good, like “might” or “implies.” The same caution applies to COVID-19. In fact, this understanding could save your life.
When President Trump touted the advantages of hydroxychloroquine as a way to prevent and treat COVID-19, he was dramatically overstating the results of one observational study. Later studies with control groups showed that it did not work—and, in fact, it didn’t work as a preventative for President Trump and others in the White House who contracted COVID-19. Most survived that outbreak, but hydroxychloroquine was not one of the treatments that saved their lives. This example demonstrates how misleading and even harmful overstated results can be, in a global pandemic.
9. Is there any conflict of interest suggested by the funding or the researchers’ affiliations?
A 2015 study found that you could drink lots of sugary beverages without fear of getting fat, as long as you exercised. The funder? Coca-Cola, which eagerly promoted the results. This doesn’t mean the results are wrong. But it does suggest you should seek a second opinion: Has anyone else studied the effects of sugary drinks on obesity? What did they find?
It’s possible to take this insight too far. Conspiracy theorists have suggested that “Big Pharma” invented COVID-19 for the purpose of selling vaccines, and that we therefore shouldn’t trust the companies’ own trials showing that the vaccine is safe and effective.
But, in addition to the fact that there is no compelling investigative evidence that pharmaceutical companies created the virus, we need to bear in mind that their trials didn’t unfold in a vacuum. Clinical trials were rigorously monitored and independently reviewed by third-party entities such as the World Health Organization and by government agencies around the world, like the FDA in the United States.
Does that completely eliminate any risk? Absolutely not. It does mean, however, that conflicts of interest are being very closely monitored by many, many expert eyes. This greatly reduces the probability and potential corruptive influence of conflicts of interest.
10. Do the authors reference preceding findings and original sources?
The scientific method is based on iterative progress, and grounded in coordinating discoveries over time. Researchers study what others have done and use prior findings to guide their own study approaches; every study builds on generations of precedent, and every scientist expects their own discoveries to be superseded by more sophisticated future work. In the study you are reading, do the researchers adequately describe and acknowledge earlier findings, and key contributions from other fields or disciplines that inform aspects of the research or the way they interpret their results?
This was crucial for the debates that have raged around mask mandates and social distancing. We already knew quite a bit about the efficacy of both in preventing infections, informed by centuries of practical experience and research.
When COVID-19 hit American shores, researchers and doctors did not question the necessity of masks in clinical settings. Here’s what we didn’t know: What kinds of masks would work best for the general public, who should wear them, when should we wear them, were there enough masks to go around, and could we get enough people to adopt best mask practices to make a difference in the specific context of COVID-19?
Over time, after a period of confusion and contradictory evidence, those questions have been answered. The very few studies that have suggested masks don’t work in stopping COVID-19 have almost all failed to account for other work on preventing the disease, and had results that simply didn’t hold up. Some were even retracted.
So, when someone shares a coronavirus study with you, it’s important to check the date. The implications of studies published early in the pandemic might be more limited and less conclusive than those published later, because the later studies could lean on and learn from previously published work. Which leads us to the next question you should ask when you hear about coronavirus research…
11. Do researchers, journalists, and politicians acknowledge limitations and entertain alternative explanations?
Is the study focused on only one side of the story or one interpretation of the data? Has it failed to consider or rule out alternative explanations? Do the researchers demonstrate awareness of which questions their methods can answer and which they cannot? Do the journalists and politicians communicating the study know and understand these limitations?
When the Annals of Internal Medicine published a Danish study last month on the efficacy of cloth masks, some suggested that it showed masks “make no difference” against COVID-19.
The study was a good one by the standards spelled out in this article. The researchers and the journal were both credible, the study was randomized and controlled, and the sample size (4,862 people) was fairly large. Even better, the scientists went out of their way to acknowledge the limits of their work: “Inconclusive results, missing data, variable adherence, patient-reported findings on home tests, no blinding, and no assessment of whether masks could decrease disease transmission from mask wearers to others.”
Unfortunately, their scientific integrity was not reflected in the ways the study was used by some journalists, politicians, and people on social media. The study did not show that masks were useless. What it did show—and what it was designed to find out—was how much protection masks offered to the wearer under the conditions at the time in Denmark. In fact, the amount of protection for the wearer was not large, but that’s not the whole picture: We don’t wear masks mainly to protect ourselves, but to protect others from infection. Public-health recommendations have stressed that everyone needs to wear a mask to slow the spread of infection.
As the authors write in the paper, we need to look to other research to understand the context for their narrow results. In an editorial accompanying the paper in Annals of Internal Medicine, the editors argue that the results, together with existing data in support of masks, “should motivate widespread mask wearing to protect our communities and thereby ourselves.”
Something similar can be said of the new vaccine. “We get vaccinated for the greater good, not just to protect ourselves,” says Hass. “Being vaccinated prevents other people from getting sick. We get vaccinated for the more vulnerable in our community, in addition to ourselves.”
Ultimately, the approach we should take to all new studies is a curious but skeptical one. We should take it all seriously and we should take it all with a grain of salt. You can judge a study against your experience, but you need to remember that your experience creates bias. You should try to cultivate humility, doubt, and patience. You might not always succeed; when you fail, try to admit fault and forgive yourself.
Above all, we need to try to remember that science is a process, and that conclusions always raise more questions for us to answer. That doesn’t mean we never have answers; we do. As the pandemic rages and the scientific process unfolds, we as individuals need to make the best decisions we can, with the information we have.
This article was revised and updated from a piece published by Greater Good in 2015, “10 Questions to Ask About Scientific Studies.”