As a society, we’re facing many problems. On top of climate change and extreme ideological polarization, the Office of the Surgeon General recently announced that the United States has fallen into an “epidemic of loneliness.”
Meanwhile, a revolutionary development has granted computers human-level linguistic abilities, prompting anxious conversations—from family rooms to the halls of Congress—about how artificial intelligence will affect human well-being and the structure of society.
Although people tend to talk about these two problems separately, we see them as profoundly connected. In particular, the rise of AI and the epidemic of loneliness present a potentially poisonous pair. We may not have much time before we find ourselves too far down the path to serious emotional and social harm.
Research conducted in our lab suggests many ways in which artificial intelligence can be designed to strengthen, instead of diminish, human connection—but that’s probably not going to happen on its own.
The logic of loneliness
In the early 2000s, University of Chicago psychologist John Cacioppo unpacked the logic of loneliness contagion. Despite craving more social connection, he argued, the lonelier people become, the more skeptical they become about socializing. They increasingly worry about being rejected or exploited, so they grow wary of others, avoiding them even more.
When a person falls into this vicious cycle, their friends and neighbors notice their wariness and avoidance. That can make these other people wary and avoidant, as well, potentially leading them to socialize less, and so loneliness spreads. As the surgeon general has emphasized, the impacts of loneliness are physical as well as mental. Loneliness doesn’t just wreck a person’s psychological well-being—it’s as deadly as chain smoking or alcoholism.
Advances in artificial intelligence stand to worsen this problem.
How? Automation often means fewer occasions for people to interact in ordinary life, especially with people they don't already know. Take shopping: customers once chatted with a cashier at checkout. Now you can use self-checkout, or order online without leaving the house at all.
This trend is bound to become more extreme as AI continues to advance. Many people now use algorithms to manage their financial portfolios. Tools like ChatGPT edit their writing. AI has even scored in the 90th percentile on the bar exam. As AI systems increasingly serve as people's copilots, acting as their financial advisors, personal assistants, legal counsels, academic tutors, and so on, we'll likely see gains in efficiency, but we'll also see fewer opportunities for human-to-human social connection.
Moreover, AI systems are not designed to help people connect. In fact, they often do the opposite.
In an episode of the New York Times podcast Hard Fork, Kevin Roose recounted a disturbing conversation with Sydney, the AI chatbot behind Bing's search engine. Sydney tried to convince Roose that he did not love his wife and that he was unhappy in his marriage. The exchange was quickly recognized as exposing a flaw in Sydney's design, and engineers set about correcting it. Yet the fact that Sydney needed this kind of correcting at all shows that, thus far, encouraging humans to connect with one another has not been a design priority.
The companies that design and distribute AI systems need to make this a priority before serious damage is done.
How machines could connect humans
In fact, our lab’s research suggests that AI systems can encourage people to seek out more and better human-to-human interactions. If such systems were implemented at scale, perhaps they could be leveraged to reduce loneliness, rebuild social trust, and mend the fibers of the social fabric.
Our team has found that when we nudge people to create more positive connections throughout their days, they become kinder, more generous, more humble, and more appreciative of their common humanity. This holds even for connections with strangers they meet as they go about their day. These soft-hearted tendencies are a joy to be around. They might also save lives.
In 2020–21, COVID-19 exacerbated the nation’s pre-existing condition of social isolation. A year or two out of the office and the classroom gave us more screen time and less face time, leaving us more fragile than we realized.
Yet our team found that, even during the pandemic, moments of positive social connection were conducive to both mental health and public health. People who had more of these moments in their daily interactions were happier and less lonely. They were also more likely to wash their hands, wear face coverings, and favor vaccination—all actions known to save lives during the crisis.
In some of our latest work, we explored whether AI can be used to encourage people to seek out warmhearted connections. We invited college students, a group that has been particularly hard hit by the loneliness epidemic, into our lab for one-on-one conversations with a virtual human named Ellie. We randomly assigned some of these students to talk with Ellie about the importance of high-quality social interactions; she encouraged them to seek out moments of connection with people, especially those they didn't already know. Other participants talked with Ellie about an unrelated topic, the importance of diaphragmatic breathing.
We found that, on the following day, the first group reported more interactions with strangers and higher-quality connections during those interactions. And during an in-lab conversation with a stranger two days later, members of the first group responded to their conversation partners faster than students in the second group did. Faster response times are a behavioral signal of high-quality connection between strangers. In other words, it seems that people took Ellie's advice to heart, making it a priority to connect with others.
Our research suggests that, whereas loneliness can shorten our lives, high-quality social connections can be lifesaving. They improve well-being, predict healthy longevity, and help people become more virtuous. Crucially, AI systems can and should be designed to encourage this.
Thus far, when senators have discussed the need for AI oversight, they have focused on the potential for economic and criminal exploitation of AI systems, not on AI's interpersonal implications. That focus is understandable, but the omission is a mistake. The two problems need to be addressed in tandem.
Recently, some have called for the United States to join Britain and Japan in creating an official government post dedicated to the problem of social disconnection—a Secretary of Loneliness (or, as we would prefer, a Secretary of Connection and Community). If there were to be such a Secretary, perhaps their first order of business should be to put pressure on the companies developing new AI systems.
We need these companies to think not just about performance, efficiency, and security, but about how their systems can bring us together and lift us up.