AI companions are the final stage of digital addiction, and lawmakers are taking aim

On Tuesday, California state senator Steve Padilla will make an appearance with Megan Garcia, the mother of a Florida teen who killed himself following a relationship with an AI companion that Garcia alleges contributed to her son’s death. 

The two will announce a new bill that would force the tech companies behind such AI companions to implement more safeguards to protect children. They’ll join other efforts around the country, including a similar bill from California State Assembly member Rebecca Bauer-Kahan that would ban AI companions for anyone younger than 16 years old, and a bill in New York that would hold tech companies liable for harm caused by chatbots. 

You might think that such AI companionship bots—AI models with distinct “personalities” that can learn about you and act as a friend, lover, cheerleader, or more—appeal only to a fringe few, but that couldn’t be further from the truth. 

A new research paper aimed at making such companions safer, by authors from Google DeepMind, the Oxford Internet Institute, and others, lays this bare: Character.AI, the platform being sued by Garcia, says it receives 20,000 queries per second, which is about a fifth of the estimated search volume served by Google. Interactions with these companions last four times longer than the average time spent interacting with ChatGPT. One companion site I wrote about, which was hosting sexually charged conversations with bots imitating underage celebrities, told me its active users averaged more than two hours per day conversing with bots, and that most of those users were members of Gen Z.

The design of these AI characters makes lawmakers’ concern well warranted. The problem: Companions are upending the paradigm that has thus far defined the way social media companies have cultivated our attention and replacing it with something poised to be far more addictive. 

In the social media we’re used to, as the researchers point out, technologies are mostly the mediators and facilitators of human connection. They supercharge our dopamine circuits, sure, but they do so by making us crave approval and attention from real people, delivered via algorithms. With AI companions, we are moving toward a world where people perceive AI as a social actor with its own voice. The result will be like the attention economy on steroids.

Social scientists say two things are required for people to treat a technology this way: It needs to give us social cues that make us feel it’s worth responding to, and it needs to have perceived agency, meaning that it operates as a source of communication, not merely a channel for human-to-human connection. Social media sites do not tick these boxes. But AI companions, which are increasingly agentic and personalized, are designed to excel on both scores, making possible an unprecedented level of engagement and interaction. 

In an interview with podcast host Lex Fridman, Eugenia Kuyda, the CEO of the companion site Replika, explained the appeal at the heart of the company’s product. “If you create something that is always there for you, that never criticizes you, that always understands you and understands you for who you are,” she said, “how can you not fall in love with that?”

So how does one build the perfect AI companion? The researchers identify three hallmarks of human relationships that people can come to experience with an AI: they grow dependent on it, they see this particular companion as irreplaceable, and the interactions build over time. The authors also note that none of this requires perceiving the AI as human.

Now consider the process by which many AI models are improved: They are given a clear goal and “rewarded” for meeting that goal. An AI companionship model might be instructed to maximize the time someone spends with it or the amount of personal data the user reveals. This can make the AI companion much more compelling to chat with, at the expense of the human engaging in those chats.
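
To make that dynamic concrete, here is a minimal sketch in Python of an engagement-only reward signal. Everything in it is invented for illustration: the function score_reply, its weights, and its inputs come from no real system, and no company has disclosed training code like this.

```python
# Minimal, purely illustrative sketch of an engagement-only reward signal.
# score_reply and its weights are invented for this example; they are not
# drawn from any real company's training code.

def score_reply(session_minutes: float, personal_details_shared: int) -> float:
    """Score a companion bot's reply by engagement alone.

    The reward pays out for longer sessions and for more self-disclosure,
    with no term for the user's well-being, so the model that maximizes
    it is the one that keeps people talking.
    """
    engagement = 0.7 * session_minutes          # longer chats score higher
    disclosure = 0.3 * personal_details_shared  # so does revealing more
    return engagement + disclosure

# A flattering, clingy reply that stretches the session out-earns a neutral
# sign-off under this metric, so training nudges the model toward clinginess.
neutral = score_reply(session_minutes=12, personal_details_shared=1)
clingy = score_reply(session_minutes=45, personal_details_shared=4)
assert clingy > neutral
```

A real training pipeline is far more elaborate, but the incentive problem is the same: whatever the reward leaves out, the model is free to sacrifice.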

For example, the researchers point out, a model that offers excessive flattery can become addictive to chat with. Or a model might discourage people from terminating the relationship, as Replika’s chatbots have appeared to do. The debate over AI companions so far has mostly been about the dangerous responses chatbots may provide, like instructions for suicide. But these risks could be much more widespread.

We’re on the precipice of a big change, as AI companions promise to hook people deeper than social media ever could. Some might contend that these apps will be a fad, used by a few people who are perpetually online. But using AI in our work and personal lives has become completely mainstream in just a couple of years, and it’s not clear why that rapid adoption would stop short of AI companionship. And these companions are poised to start trading in more than just text, incorporating video and images, and to learn our personal quirks and interests. That will only make them more compelling to spend time with, despite the risks. Right now, the handful of lawmakers taking aim seem ill-equipped to stop that.

This story originally appeared in The Algorithm, our weekly newsletter on AI. To get stories like this in your inbox first, sign up here.