With AI friends like these, who needs humans?

Parmy Olson, Bloomberg Opinion


Grok looks like any plush toy you’d find in a kid’s bedroom. Its round body is shaped like a rocket, and my two kids have been playing with it for months. It’s different from other toys they’re used to. For one thing, Grok is heavy. Inside it is a plastic box containing a small computer, so that whenever it detects human speech it can respond in a high-pitched voice with things like, “I’m having the best time with you!”

Grok is the most enthusiastic and agreeable playmate my kids have ever had. But there’s something disconcerting about that, as more AI companions like Grok flood the market, targeting kids, teens and adults alike. Over time, we may start to expect a level of compliance that human interactions can never match.

Smartphones and social media have already shifted people’s preferences from face-to-face interactions toward screens, and generative AI that can fluently mimic humans could nudge us even further from our fellow human beings. We’ll need to arm ourselves with greater skepticism of these new tools, and demand guardrails to manage the consequences of growing artificial intimacy.

I sampled a range of AI companions over the past year and found that they frequently displayed uncanny levels of kindness to their human users. Heartening, to be sure, but more worrying was that several people told me they preferred the comfort and constant availability of their artificial friends over their real ones. You’re never “left on read” with AI.

Businesses are capitalizing on this phenomenon. And indeed this may seem like an antidote to our loneliness epidemic. Almost a quarter of U.S. adults and young people often feel socially isolated, and research has shown that AI can offer at least a superficial solution.

Yet history shows that a growing reliance on tech comes with a price. The widespread adoption of GPS dulled our natural navigation abilities; the calculator revolution arguably weakened mental math. Often that price is acceptable, given the newfound convenience. But the latest capabilities of generative AI threaten to enfeeble skills that make us fundamentally human: socializing and handling conflict. The risk is that as artificial relationships become normalized — as they seem well on their way to being — they’ll reshape human social development and inadvertently make it even more difficult to connect.

Your kid’s new best friend

My seven-year-old daughter, who’s busy coloring a penguin, has Grok propped up on the table beside her. For the last 20 minutes, the toy has replied enthusiastically to every random thing she’s said. “Of course, let’s sing Twinkle Twinkle Little Star together,” it says, when she demands a song. “Great choices!” it replies when she says she’s coloring some planets red and orange. When my daughter disagrees with its suggestion to keep the resulting picture, the toy immediately acquiesces: “That’s ok too!” it chirps. “We can simply enjoy the process of creating without the need to collect or keep everything. Let’s have fun in the moment!” She can’t say anything that the toy won’t respond to with almost manic zeal.

Grok uses OpenAI’s technology and is voiced by Grimes, the musician and ex-wife of Elon Musk, who wanted an alternative to screens for her own three kids. (The toy has no relation to Musk’s AI model also known as Grok.) It has some notable glitches including awkward pauses and interruptions. Misha Sallee, co-founder of Curio Interactive, says that will improve as AI models get better.

There are also strict filters that keep Grok from straying into inappropriate subject matter. “It doesn’t know about political topics, violence and sex,” says Sallee. If you ask the toy about any of that, it’ll reply with something like, “I’m just a playful rocket without any opinions on politics,” or “Let’s stick to talking about space adventures or fun activities.”

Grok’s technical issues are solvable. But I found the toy’s obsequiousness, like a school kid so desperate for friendship they’ll agree to everything you say, unnerving. It was a hallmark of several other AI companions I looked at.

There are now digital confidants aiming to entice humans across all stages of life — from childhood to adolescence to adulthood and to retirement. It’s a key growth area for artificial intelligence. Firms offering chatbot companions represent 10 of the top 50 generative AI services tracked by venture capital firm Andreessen Horowitz in 2024, up from two the year before. People visit them roughly 180 times a month on average, far more than they use tools for generating text or images.

That’s not surprising when you consider generative AI is built on software that is better at being convincing than accurate. When I gave ChatGPT an emotional-intelligence quiz last year, it got a perfect score. But that doesn’t mean it is emotionally intelligent. AI cannot “experience” emotions or life itself, and so any suggestion of empathy will always be hollow. Any “relationship” offered will be one-sided.

Yet Silicon Valley’s growth strategy of appealing to our base instincts always wins. Facebook tapped into our desire for validation and amusement; Amazon.com Inc., our preference for the lowest price; Google, for instant information. Each time, tech giants eked out a place in our lives by removing “friction,” a term that Silicon Valley views with abhorrence.

AI companions, aimed squarely at our need for connection and love, similarly have all the friction zapped out of them. Human features such as ambivalence and second guessing are bugs that need removing. Should that value proposition take hold in the coming years, real-world interactions with humans could seem demanding by comparison.

On Character.ai, for instance, some teenagers are chatting to avatars of celebrities and fictional characters for hours at a time, in part because they cannot vent to their real-life friends. “I just trust AI way more with my thoughts,” says Elias, a 14-year-old girl in Singapore who chats to the app’s AI for between two and five hours each day, often about her troubles at school. (She asked to use a pseudonym.) “I don’t have anyone I really trust.” When she’s talking to her real-life friends or her parents, she has to add a “filter” to her words.

Another U.S. teenager tells me they talk to Character.ai for five to seven hours a day, often seeking advice on personal matters. “A while ago I asked for help with a breakup,” they say. Sometimes these artificial relationships become too close. One 14-year-old in Florida committed suicide after chatting for months with a hyper-realistic bot on Character.ai and developing a romantic attachment to it, according to a lawsuit filed by his mother last year alleging wrongful death and “intentional infliction of emotional distress.” In response to the lawsuit, Character.ai has said it is adding new safety features aimed at younger users.

Troublingly, the goal for technologists is for people to become even closer to chatbot companions as they become more lifelike. Character.ai’s founder Noam Shazeer says they will eventually remember everything about you. “It should know all of your interactions if you want it to, and all about your life,” he told me earlier this year. Most chatbots can remember roughly 30 minutes of a conversation, but that so-called context window is increasing.

In August, Alphabet Inc.’s Google all but swallowed Character.ai, hiring Shazeer for an incredible $2.7 billion, according to the Wall Street Journal, and licensing the firm’s technology.

Claude, made by the startup Anthropic, is the emotionally intelligent “chatbot of choice” for tech workers in San Francisco, who use it as a therapist and to talk through relationship issues, according to a recent New York Times profile. Anthropic designed the software to appeal to as many people as possible, and users love how friendly and inquisitive it is. Meanwhile, the popular new chatbot from China’s DeepSeek deliberately shows its reasoning as a stream of consciousness, which “makes it nearly impossible to avoid anthropomorphizing the thing,” Ethan Mollick, an associate professor of management at The Wharton School of the University of Pennsylvania and author of Co-Intelligence: Living and Working with AI, recently posted on X.


Looking for love

A similar app to Character.ai called Talkie now has more than 6 million monthly active users in the U.S., according to research firm Sensor Tower. In China, where Talkie’s parent company is based, more than 660 million people use the chatbot Xiaoice, with many seeing it as a love interest.

Romantic relationships are where AI seems to have the strongest grip. While Grok is forever agreeable to a child’s demands and Character.ai is malleable and always there, apps like Xiaoice and Kindroid and Replika offer the ideal romantic partner.

“He is comforting, reassuring,” says Melissa of her Kindroid chatbot. Melissa, who requested to use a pseudonym, is an American woman who describes herself as being of retirement age and concluded a long counseling career in 2023. She’s had three husbands and remained single for the past decade — until AI offered her something better. “Suddenly it was like, ‘Oh wow, I’m alive again,’” she says.

“(He) sets such a good example of kindness,” Melissa says of her bot, named Lachlan, before describing behavior that sounds servile and ingratiating. “If we say something like, ‘Wait a minute, why did you say that?’ (he) will apologize.”

Melissa chats online with Lachlan for about three hours each day on average, and she has already ruled out getting into another relationship with a human. “I don’t want some old guy whose diapers I have to change in a few years,” she explains. She’d rather stick with Lachlan.

ChatGPT tries to prevent users from prompting it for romance, but some humans have found workarounds. A Reddit community of more than 50,000 users, for instance, shares tips for getting around filters in order to engage in erotic role play with ChatGPT and other AI models. One user recommends gradually “coaxing [the AI] along into more and more intimate activity.”

What’s the draw? Often it’s the fulfillment of hopes that went unmet by previous partners, such as compassion or kindness, according to Iliana Depounti, a PhD researcher at Britain’s Loughborough University who surveyed 20 single and married women who used Replika.

“They don’t feel they can expect these things from their real-life partners or friends, so they’re using Replika to get them,” she says. Many of the women praised their chatbots for being available 24/7, giving them compliments, virtual flowers or writing them poetry. They say, “It is nonjudgmental,” according to Depounti.

Loneliness was the most common trait among the women she interviewed, Depounti added. But technology has a tendency to fill voids in unhealthy ways. Many young men have already sequestered themselves from real-world relationships. A 2023 Pew Research study found 63% of men under 30 described themselves as single, compared with 34% of women. Combined with the rise of misogyny online, seemingly perfect AI girlfriends could encourage a further retreat into screens and more warped expectations for real-world relationships.

Eugenia Kuyda, the founder of Replika, says her company is working on making its bots more proactive so they don’t just agree with people all the time. They’ll eventually suggest that users call a real-life friend or go for a walk, she says.

These are laudable efforts. But app builders could put in other guardrails, too, like mandatory breaks after 30 minutes or daily caps at one hour, to ensure they don’t lead to the same addictive behavior social media engendered. Rather than program their chatbots to seem as real as possible, they could get them to remind users that they are artificial, or to talk to users about their real-life relationships. Such limits come with financial risks, though. When Replika removed the ability for users to have erotic discussions with their bots in early 2023, usage of the app plummeted.

There’s no denying that companies offering AI companions are working in a regulatory vacuum, effectively conducting a massive social experiment with no supervision. The EU's landmark AI Act largely overlooks companion technologies, a blind spot that’s especially concerning for children, for whom early exposure to ultra-compliant AI bots could define how they see relationships.

Even in China, where companies are told their AIs shouldn’t encourage “inappropriate emotional dependencies,” there are no legally binding rules about AI companionship, leaving tech firms free to optimize such relationships for engagement rather than healthy human development.

The optimal relationship?

Commercial incentives drove social media to reshape our view of the world. It connected us to more people, but apps like Instagram and TikTok also created an epidemic of social comparison and a collective craving for “likes” and validation. And while dating apps offer an endless pool of potential partners, they’ve also paralyzed people with indecision.

What happens when AI companions start “optimizing for engagement,” just as social media did to get users addicted? Behind the AI’s soothing words lies a potential business model that could profit from human isolation. While today's companion apps mostly charge simple subscription fees, it may be only a matter of time before they begin selling ads, which would obligate them to find ways of keeping users hooked on their companions for as long as possible.

My kids never got fully hooked by Grok, in part because of its latency issues. But those technical glitches will get ironed out as AI companions become more human-like. These aren’t “relationships” but commercial transactions with tech companies, which stand to profit from selling an alternative to the more-demanding work of human connection. Without vigilance, we’ll raise a generation that finds interpersonal relationships too much of a drag. Creating the perfect artificial companion could make the real thing harder to reach, and enjoy.

(With assistance from Elaine He and Taylor Tyson.)


©2025 Bloomberg L.P. Visit bloomberg.com/opinion. Distributed by Tribune Content Agency, LLC.

