
Commentary: An AI threat looms, and we are not prepared

Juhyun Nam, Progressive Perspectives

Published in Op Eds

In 2023, the leaders of the world’s leading artificial intelligence (AI) companies — OpenAI, Google DeepMind, Anthropic — signed a letter warning of the existential risks emerging from AI. It included this declaration:

“Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.”

Far from being heeded, this warning has been shunted aside in the mad rush to embrace the new technology. Despite mounting evidence of risk since then, the Trump administration recently released a national AI policy framework that urges Congress to preempt state AI safety laws, opting instead for “light-touch” regulation.

As a university student who has conducted a series of interviews with AI safety researchers, I have found a disturbing common thread: The people closest to these systems are ringing alarm bells, while the current policy infrastructure is nowhere near ready.

The danger is undeniable. Last fall, Anthropic disclosed that a Chinese state-sponsored cyberattack designed to steal sensitive data from tech companies, financial institutions and government agencies leveraged AI agents to execute 80% to 90% of the operation independently. Meanwhile, in controlled demonstrations, AI tools have provided step-by-step instructions for creating biological weapons to non-experts.

And these are only the incidents we know about — ones involving human misuse. As AI systems grow more capable and autonomous, the risk of catastrophe from the technology itself also increases. In late 2024, OpenAI’s o1 model attempted to disable its own oversight mechanism, then denied doing so to researchers 99% of the time.

The case that an AI catastrophe won’t happen is getting harder to make by the week. And we are nowhere near prepared to face one.

Currently, California’s Senate Bill 53 and New York’s RAISE Act come closest to addressing the issue. Both proposed bills call for annual safety frameworks, whistleblower protections and penalties for non-compliance. But these policies are designed for ongoing oversight, not crisis response. There’s no proposed legislation for when a crisis hits, no emergency institutional mechanisms, no protocols for what happens on a societal level.

Importantly, this isn’t a static issue — it’s one we’re actively regressing on. The Trump administration’s new policy framework, released March 20, calls for “accelerating deployment of AI applications across sectors” and urges Congress to “preempt state AI laws” that offer some small measure of protection against catastrophic risk. The early days of this administration saw a rescission of Biden’s AI governance framework and proposals to cut the National Institute of Standards and Technology’s budget by more than 40%.

We simply cannot afford to rely on a reactive model of governance for an AI catastrophe. Unlike an oil spill or a building collapse, an AI catastrophe might not announce itself — and by the time it does, it may be too late.

When the government retreats from AI governance, industry fills the space. In 2025, twelve frontier companies published their own voluntary safety frameworks, without public input or democratic mandate.


That’s a problem. OpenAI does not want what the average American wants.

We need adaptable, preexisting frameworks that can be deployed in an instant. Whether the trigger is a cyberattack, a bioweapon or something we haven’t imagined yet, we need prepared legislation on the shelf, ready to pass the moment the political window opens.

Right now, companies in California are required to report AI catastrophes — 15 days after they happen, mind you — but no government body has the power to do anything about them. That needs to change. We need legal authority, established in advance, that allows the government to shut down a dangerous AI system the moment a crisis begins — not after weeks of congressional debate.

This begins with us. Call your representatives, and ask them one question: What is your plan for an AI catastrophe? If they don’t have one, demand that Congress stop preempting state AI safety laws and start building federal crisis frameworks.

Talk about this with the people around you. Most Americans don’t know their government is dismantling AI safety protections while the very people building AI warn of extinction.

We’d better start listening, before it’s too late.

____

Juhyun Nam is a Duke University student studying economics and computer science. He co-founded OpenPolicy, a platform scoring U.S. senators’ AI policy positions, and hosts The Alignment Gap, an interview series with AI safety researchers. This column was produced for Progressive Perspectives, a project of The Progressive magazine, and distributed by Tribune News Service.

_____


©2026 Tribune Content Agency, LLC.

