In 2020, I witnessed an online influence campaign so devious that I could not stop thinking about it—in fact, it inspired me to begin researching my book, Stories Are Weapons, from which this essay is adapted.


The reason I found it so compelling was that it was a two-stage psychological operation—or psyop, as the military would call it: First, the unknown operatives spread a wave of disinformation; next, they spread a second wave designed to inoculate people against any efforts to debunk the first.

Here’s how it went down. One night I started to see posts on Twitter claiming that the entire DC area was in lockdown and there was a blackout in response to the Black Lives Matter protests. Posts with the hashtag #DCBlackout began piling up in my feed, and at first they appeared to be from ordinary citizens reporting from the ground.


But then I noticed some red flags that made the #DCBlackout reports seem suspicious.

First of all, many people posting under the hashtag used the exact same words to describe what was happening, as if they were copy-pasting from a script. And then posts with the #DCBlackout hashtag started sharing a new message: law enforcement was blocking cell phone access to prevent protesters from sharing the truth.

That’s when I knew #DCBlackout was a ruse. How could people be posting on their phones from the scene of the protests if their cell signals were blocked?

Despite this logical inconsistency, rumors of the blackout spread quickly, resulting in 500,000 tweets with the #DCBlackout hashtag. Some included fake images of a fire near the Washington Monument, which came from the (fictional) TV series Designated Survivor.

Reputable news sources moved to counter the blackout rumors. And that’s when the second wave of the psyop began to inundate my feed. Hundreds of posts with a new hashtag, #DCSafe, started to pop up, denying that the DC blackout was happening.

Unlike the tweets from news reporters debunking the blackout, these were clearly copy-pasted by psywarriors or maybe bots. They all used the exact same quirky phrasing:

yeah . . . as someone seeing #dcblackout trending, who lives and works in the DC metro area, and who has friends telecommuting into DC rn. . . . . This hashtag looks like misinformation. “No social media from DC” because we were asleep. Stop scaring people. #dcsafe

It was a psyop designed to look like a psyop. People on Twitter quickly noticed the duplicate tweets coming from a wide variety of accounts, and called them out as fake. But why would a group of psyops agents want their targets to figure out that they were being fooled?

The answer is that they didn’t. By making the “debunking” tweets from #DCSafe so obviously inauthentic, the operatives also cast doubt on the real tweets from journalists at NPR and elsewhere. The #DCSafe operation made it seem like some shady group was trying to cover up what was happening in D.C., and that the mainstream media was in on it. Anyone debunking #DCBlackout looked suspicious by association, as if they were part of a psyop.

It was a brilliantly nefarious move: an influence operation to arouse fear over a fake blackout, and then to arouse fear over a fake coverup of the fake blackout.

The #DCBlackout/#DCSafe operation was an example of what disinformation experts call “coordinated inauthentic behavior.” Though nobody has identified the perpetrators of this particular psyop, it was certainly coordinated by operatives—possibly foreign—and then amplified by regular people tuning in to the hashtag. The account behind the first tweet to use the #DCBlackout hashtag had only three followers, but the post was picked up by accounts with greater reach and went massively viral in a matter of minutes. I was left wondering how that happened, and so quickly.

I wrote Stories Are Weapons in order to find out. I discovered psychological warfare has no known origin story. By the time the Chinese classic The Art of War was written, likely 2,500 years ago, the practice was already widely used. In the 19th century, militaries realized that uncertainty and chaos could be weaponized: When an enemy is confused by multiple conflicting accounts of what’s happening, they are vulnerable and easily manipulated. In the 20th century, militaries around the world created departments dedicated to psychological war (or “psywar”). As a 2014 U.S. Army teaching manual for “military information support operations” says, their goal is to target foreign audiences “to elicit behaviors favorable to U.S. national objectives.”

In the 21st century, we’ve seen psywar techniques increasingly deployed against Americans—at first by foreign governments, but more and more by Americans against Americans. No matter who is responsible or what media they use, these activities all share the same goal: whipping up emotions against an enemy. There are three major psychological weapons that combatants often transfer into culture war: scapegoating, deception, and violent threats. These weapons are what separate an open, democratic public debate from a psychological attack.

In a militarized culture war, combatants will scapegoat specific groups of Americans by painting them as foreign adversaries; next, these culture warriors will lace their rhetoric with lies and bully their adversaries with threats of violence or imprisonment.

In Stories Are Weapons, I look at how this weapons transfer took place in some of the past century’s devastating culture wars over American identity, zeroing in on conflicts over race and intelligence; school board fights over gay, lesbian, bisexual, and transgender students; and activist campaigns to suppress feminist stories. In every case, we see culture warriors singling out specific groups of Americans, like Black people or transgender teens, and bombarding them with psyops products as if they were enemies of the state.

As a result, Americans are increasingly not engaging in democratic debate with one another; instead, they are launching weaponized stories directly into each other’s brains. But we have the power to decommission those weapons. There is a pathway to peace, and it starts with the idea of psychological disarmament.

My book is a story about how one nation, the United States, turned people’s minds into metaphorically blood-soaked battlegrounds—and how we, the people, can put down our weapons and build something better. Here are some of the solutions I highlight.

Create an early warning system

In the lead-up to the 2016 presidential race, Russian operatives used Facebook to reach over 126 million Americans with highly targeted ads, content, and memes. Their intent was to create chaos—but also to discourage Black people from voting.

This essay is adapted from Stories Are Weapons: Psychological Warfare and the American Mind (W. W. Norton, 2024, 272 pages).

I wanted to know what the future of this psychological arms race would look like, so I got in touch with Alex Stamos, the former chief security officer at Facebook, who spotted the 2016 election psyops campaigns as they unfolded. Stamos was the founding director of the Stanford Internet Observatory, a nonpartisan, interdisciplinary group of researchers who advise government and industry about misinformation campaigns online, as well as other trust and safety issues.

For the 2020 election cycle, the Observatory joined a coalition of other research groups called the Election Integrity Partnership (EIP) to track misinformation on social media. The EIP built a simple online reporting system that allowed election workers, cybersecurity experts, academic researchers, and nonprofit citizens’ groups to file “tickets” reporting election dis- and misinformation as it flew by on eight different social media platforms. (Disinformation is intentionally false; misinformation is inaccurate or mistaken.) After intensive analysis, the EIP concluded that online influence campaigns were a driving force behind the January 6, 2021, insurrection at the Capitol.
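
To make the idea of a “ticket” concrete, here is a minimal sketch in Python of what one such report might record. This is purely illustrative: the field names and structure are my own assumptions, not the EIP’s actual reporting system.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone
from typing import Optional

@dataclass
class MisinfoTicket:
    """One report of suspected election dis- or misinformation (illustrative only)."""
    reporter: str           # e.g., an election official, researcher, or citizens' group
    platform: str           # which of the monitored platforms the post appeared on
    url: str                # link to the post being reported
    claim_summary: str      # short description of the claim
    intentional: Optional[bool] = None   # True = disinformation, False = misinformation,
                                         # None = not yet determined by analysts
    filed_at: datetime = field(default_factory=lambda: datetime.now(timezone.utc))

# Example: an election worker reports a false "polls closed early" rumor.
ticket = MisinfoTicket(
    reporter="county_election_office",
    platform="twitter",
    url="https://example.com/post/123",
    claim_summary="Claims the county's polling places closed at noon",
)
print(ticket.intentional)  # None until analysts decide whether the claim was deliberate
```

The useful part of a system like this is less the data structure than the shared pipeline: many different kinds of observers can file reports in one place while the claims are still spreading.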

I met Stamos at an Arab street-food spot in San Francisco, where we talked about the state of digital psyops over a bowl of hummus. I asked him what had changed between the presidential election cycles in 2016 and 2020, both of which included significant digital propaganda.

He said one obvious difference was the source of influence operations: In 2015, a significant number of operations were foreign, but in 2020 most were done by Americans to Americans. The 2016 campaign taught the big platforms like Facebook how to spot and shut down most of the foreign election meddling.

However, as Stamos put it, they hadn’t figured out how to stop “Trump and his allies priming everyone to believe the election would be stolen.” As a result, there were stochastic influence operations coming from all sides. Ordinary citizens, primed by political leaders to see conspiracies everywhere, spread disinformation as eagerly as paid propagandists.

Is there a solution? To prevent election misinformation, Stamos suggested, the public needs to interrupt what experts call an “influence operation kill chain.” This is a series of steps—or links in the chain—that operatives go through as they escalate their influence operations. At any point in that chain, the public and social media platforms can step in and stop influence operations before they infect the public sphere with confusion. We can shut down fake pages, or prevent misinformation from going viral.
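
To see what “breaking a link” might mean in practice, here is a toy sketch. The stage names are generic placeholders I invented for illustration; the kill-chain frameworks researchers actually use are more detailed.

```python
# Toy illustration of the "influence operation kill chain" idea: an operation
# moves through a sequence of stages, and defenders can break the chain at any
# link. These stage names are invented placeholders, not a formal taxonomy.
KILL_CHAIN = [
    "create_fake_accounts",
    "build_an_audience",
    "seed_the_narrative",
    "amplify_with_coordinated_accounts",
    "go_viral",
]

def earliest_detected_stage(detected_activity):
    """Return the first stage in the chain where activity has been spotted.

    Interrupting the operation at that link (say, removing fake pages or
    throttling amplification) prevents the later stages from ever happening.
    """
    for stage in KILL_CHAIN:
        if stage in detected_activity:
            return stage
    return None

# Example: moderators notice coordinated fake accounts and a seeded narrative.
print(earliest_detected_stage({"create_fake_accounts", "seed_the_narrative"}))
# -> create_fake_accounts
```

The point of the model is simply that earlier interruptions are cheaper: stopping fake accounts is easier than un-viraling a rumor.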

After the 2020 presidential election, the EIP wrote up a report, including basic guidelines to help social media companies deal with election misinformation in the future. Hopefully, this report will make it easier for the public to recognize and interrupt influence operation kill chains next time.

The EIP suggested that the federal government focus on misinformation as a part of election security, developing standards for alerting media and the public to active influence campaigns. Social media platforms, the researchers said, would have to become more proactive about labeling dis- and misinformation, using consistent messages when they do.

Ultimately, the biggest takeaway from the EIP’s report was that communication lines need to be wide open between local governments, election workers, industry, and citizens’ groups.

“One of the core things we were trying to do was alert public officials about misinformation so they could [tell constituents] what was a rumor and what was true,” Stamos said. “We would say someone is saying the polls are closed, and maybe you should send out a message that they are still open. That’s not censorship. It’s addressing a lie that people are telling.”

Learning to slow down

We also need to rebuild social media systems to prioritize human choice rather than algorithmic chaos.

I talked about this idea with Safiya Umoja Noble, a UCLA professor who studies algorithmic bias and wrote the influential book Algorithms of Oppression. She likes to compare doomscrolling on social media to other addictive behaviors, like smoking.

Smoking is a good parallel because many regions now restrict where people can smoke and require companies to put warning labels on cigarettes. You can still light up a cigarette, but there’s friction involved in the process. She imagined regulators using anti-smoking laws as a model for social media regulations.

For example, platforms could be required to limit the notifications they use to draw people back into looking at their socials. No more cute messages popping up on your phone, luring you back into Meta products like Instagram and Threads. Limits could also be placed on “for you” or “you might like” suggestions that keep people clicking.
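
As a thought experiment, here is roughly what such a limit could look like in code. Everything here (the daily cap, the function, the names) is a hypothetical sketch, not an actual platform feature or a regulatory proposal.

```python
import datetime

# Illustrative sketch of the kind of friction such a rule might add: a per-user
# daily cap on re-engagement notifications. The cap of 3 is an arbitrary
# example, not a proposed standard.
DAILY_NOTIFICATION_CAP = 3
_sent_today = {}  # user_id -> (date, count)

def may_send_notification(user_id):
    """Return True only if this user hasn't hit today's notification cap."""
    today = datetime.date.today()
    last_date, count = _sent_today.get(user_id, (today, 0))
    if last_date != today:
        count = 0
    if count >= DAILY_NOTIFICATION_CAP:
        return False  # friction: stop nudging this person back to the app
    _sent_today[user_id] = (today, count + 1)
    return True

# The fourth attempt in a day gets blocked.
print([may_send_notification("user_42") for _ in range(4)])  # [True, True, True, False]
```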


Noble told me that social media companies could also put the brakes on sharing our data. People need to know where their content is going, as well as where it comes from. Facebook sometimes acts as a data broker, selling profile information to advertisers and other third-party companies. When people post, they should do it knowing whether their photos will be used to train facial-recognition algorithms or their words fed into large language models like ChatGPT.

We need to supplement these changes with new approaches to the algorithms that control what we see online. Companies could measure engagement by how much a platform boosts people’s morale rather than by how long it keeps them doomscrolling.
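
To make that contrast concrete, here is a minimal sketch of what an alternative metric could look like. The signals and weights are assumptions invented for illustration, not anything a real platform measures.

```python
# Toy sketch of an engagement metric that rewards morale-boosting activity
# instead of raw time-on-site. The signal names and weights are invented
# purely for illustration.
def wellbeing_score(session):
    positive = (
        session.get("supportive_replies", 0) * 2.0
        + session.get("articles_read_to_the_end", 0) * 1.5
    )
    negative = (
        session.get("rage_replies", 0) * 2.0
        + session.get("minutes_doomscrolling", 0) * 0.5
    )
    return positive - negative

# A platform optimizing this score would favor sessions that leave people
# feeling better, not just sessions that last longer.
print(wellbeing_score({"supportive_replies": 3, "minutes_doomscrolling": 40}))  # -14.0
```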

As security technologist Bruce Schneier put it, “We need to become reflexively suspicious of information that makes us angry at our fellow citizens.” Future media systems could help us talk to one another again by giving us room to slow down and consider our words.

Imagining alternatives

“I could not think of a way to have a good future with the internet as it exists today,” Ruth Emrys Gordon told me from her home office near Washington, D.C.

Gordon leads two lives. In one, armed with a Ph.D. in cognitive science, she studies ways to defend Americans against what she calls “malign influence” or “cognitive attacks.” In the other, she is science-fiction author Ruthanna Emrys, who imagines alternate scenarios in fantastical or future civilizations that echo those on Earth today.

I called her up to talk about her recent novel A Half-Built Garden, an alien first-contact story in which humans have created a better way of communicating online using what she calls “dandelion networks.”

“I wrote about an internet that has been broken into smaller networks with stronger protections between them to slow down discourse a bit and provide incentives for people to think about the value and accuracy of what they were sharing,” she said. Her thought experiment in the novel is based on what she’s learned over years of analyzing what has gone wrong in the digital public sphere.

In her novel, humanity has to manage Earth’s ailing ecosystems or, as the alien visitors warn, we will die. She imagines a democratic debate about how to care for a watershed as it blooms across the dandelion network, calling upon only the people who live within the watershed or affect its health. Everything in the network is measured and remeasured, so that the participants’ conversations are fact-based, without the confusion of propaganda. Algorithms aid them by surfacing moments of agreement and reasonable compromise and by making sure that minority voices are heard.
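
Since dandelion networks are fictional, there is no real algorithm to cite, but the behavior Gordon describes can be sketched as a toy example: rank proposals by agreement, yet reserve space so minority voices stay visible. Everything below is my own invention for illustration.

```python
# Toy illustration of the behavior described above: surface the proposals with
# the broadest agreement, but guarantee that minority viewpoints stay visible.
# The data shape and rules are invented; the novel describes no actual code.
def surface_proposals(proposals, top_n=3, minority_slots=1):
    """proposals: list of dicts with 'text', 'support' (0.0-1.0), 'minority' (bool)."""
    ranked = sorted(proposals, key=lambda p: p["support"], reverse=True)
    surfaced = ranked[:top_n]
    if not any(p["minority"] for p in surfaced):
        minority_views = [p for p in ranked if p["minority"]][:minority_slots]
        surfaced = surfaced[: top_n - len(minority_views)] + minority_views
    return [p["text"] for p in surfaced]

proposals = [
    {"text": "Restore the wetland buffer", "support": 0.8, "minority": False},
    {"text": "Limit upstream farming runoff", "support": 0.7, "minority": False},
    {"text": "Ban motorboats entirely", "support": 0.2, "minority": True},
    {"text": "Replant native grasses", "support": 0.6, "minority": False},
]
print(surface_proposals(proposals))
# -> ['Restore the wetland buffer', 'Limit upstream farming runoff', 'Ban motorboats entirely']
```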

As Gordon put it, we need stories because they offer us something to work toward. She imagined that a state of psychological disarmament would require us to meet in person more often, the way people often do at the house where she lives with her wife and children.

“We have a nice porch and have a tendency to feed people,” she said, laughing. Her neighborhood is extremely diverse, she added, and “we disagree about a lot of things, but we still say hi to each other when they walk their dogs and still bring food trays when they’re sick or have a baby.”

It’s one thing to say this, but quite another thing to see and feel it in a story. In her novel, we witness a world where people care for a watershed whose needs are addressed as if it were a member of the community. The dandelion network brings those future people together in an America where we do not reach consensus by threatening one another with death—instead, we promise one another a better life.

Achieving psychological peace doesn’t always require us to tell new stories of the kind Gordon writes. Instead, it involves understanding how many of our social interactions are shaped by the stories we’ve heard. It’s about recognizing weaponized stories when they come flying at us, instead of accepting them as factual or unquestionably good. Above all, it requires a commitment to each other’s peace and well-being.
