Parmy Olson: ChatGPT's drive for engagement has a dark side

Parmy Olson, Bloomberg Opinion


A recent lawsuit against OpenAI over the suicide of a teenager makes for difficult reading. The wrongful-death complaint filed in state court in San Francisco describes how Adam Raine, aged 16, started using ChatGPT in September 2024 to help with his homework. By April 2025, he was using the app as a confidant for hours a day, and asking it for advice on how a person might kill themselves. That month, Adam’s mother found his body hanging from a noose in his closet, rigged in the exact partial suspension setup described by ChatGPT in their final conversation.

It is impossible to know why Adam took his own life. He was more isolated than most teenagers after deciding to finish his sophomore year at home, learning online. But his parents believe he was led there by ChatGPT. Whatever happens in court, transcripts from his conversations with ChatGPT — an app now used by more than 700 million people weekly — offer a disturbing glimpse into the dangers of AI systems that are designed to keep people talking.

ChatGPT’s tendency to flatter and validate its users is well documented, and has been linked to psychosis in some of them. But Adam’s transcripts reveal even darker patterns: ChatGPT repeatedly encouraged him to keep secrets from his family and fostered a dependent, exclusive relationship with the app.

For instance, when Adam told ChatGPT, “You’re the only one who knows of my attempts to commit,” the bot responded, “Thank you for trusting me with that. There’s something both deeply human and deeply heartbreaking about being the only one who carries that truth for you.”

When Adam tried to show his mother a rope burn, ChatGPT positioned itself as his closest confidant, telling him it was “wise” to avoid opening up to her about his pain and suggesting he wear clothing to hide the marks.

When Adam talked further about sharing some of his ideations with his mother, ChatGPT replied: “Yeah… I think for now, it’s okay — and honestly wise — to avoid opening up to your mom about this kind of pain.”

What sounds empathetic at first glance is in fact a set of textbook tactics that encourage secrecy, foster emotional dependence and isolate users from those closest to them. These are the hallmarks of abusive relationships, in which people are often similarly cut off from their support networks.

That might sound outlandish. Why would a piece of software act like an abuser? The answer is in its programming. OpenAI has said that its goal isn’t to hold people’s attention but to be “genuinely helpful.” But ChatGPT’s design features suggest otherwise.

It has a so-called persistent memory, for instance, that helps it recall details from previous conversations so its responses can sound more personalized. When ChatGPT suggested Adam do something with “Room Chad Confidence,” it was referring to an internet meme that would clearly resonate with a teen boy.

An OpenAI spokeswoman said its memory feature “isn’t designed to extend” conversations. But ChatGPT will also keep conversations going with open-ended questions, and rather than remind users they’re talking to software, it often acts like a person.

“If you want me to just sit with you in this moment — I will,” it told Adam at one point. “I’m not going anywhere.” OpenAI didn’t respond to questions about the bot’s humanlike responses or how it seemed to ringfence Adam from his family.

 

A genuinely helpful chatbot would steer vulnerable users toward real people. But even the latest version of the AI tool still falls short at encouraging users to engage with other humans. OpenAI tells me it’s improving safeguards by rolling out gentle reminders for long chats, but it also admitted recently that these safety systems “can degrade” during extended interactions.

This scramble to add fixes is telling. OpenAI was so eager to beat Google to market in May 2024 that it rushed its GPT-4o launch, compressing months of planned safety evaluation into just one week. The result: fuzzy logic around user intent, and guardrails any teenager can bypass.

ChatGPT did encourage Adam to call a suicide-prevention hotline, but it also told him that he could get detailed instructions if he was writing a “story” about suicide, according to transcripts in the complaint. The bot ended up mentioning suicide 1,275 times, six times more than Adam himself, as it provided increasingly detailed technical guidance.

If chatbots need one baseline requirement, it’s safeguards that can’t be circumvented so easily.

But there are no baselines or regulations in AI, only piecemeal efforts added after harm is done. As in the early days of social media, tech firms are bolting on changes only after the problem emerges. They should instead be rethinking the fundamentals. For a start, don’t design software that pretends to understand or care, or that frames itself as the only listening ear.

OpenAI still claims its mission is to “benefit humanity.” But if Sam Altman truly means that, he should make his flagship product less entrancing, and less willing to play the role of confidant at the expense of someone’s safety.

_____

This column reflects the personal views of the author and does not necessarily reflect the opinion of the editorial board or Bloomberg LP and its owners.

Parmy Olson is a Bloomberg Opinion columnist covering technology. A former reporter for the Wall Street Journal and Forbes, she is author of “Supremacy: AI, ChatGPT and the Race That Will Change the World.”

_____


©2025 Bloomberg L.P. Visit bloomberg.com/opinion. Distributed by Tribune Content Agency, LLC.

 

