Commentary: An AI threat looms, and we are not prepared
Published in Op Eds
In 2023, the heads of the world’s leading artificial intelligence (AI) companies — OpenAI, Google DeepMind, Anthropic — signed a letter warning of the existential risks emerging from AI. It included this declaration:
“Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.”
Far from being heeded, this warning has been shunted aside in the mad rush to embrace the new technology. Despite mounting evidence of risk since then, the Trump administration recently released a national AI policy framework that urges Congress to preempt state AI safety laws, opting instead for “light-touch” regulation.
As a university student who has conducted a series of interviews with AI safety researchers, I have found a disturbing common thread: The people closest to these systems are ringing alarm bells, while the current policy infrastructure is nowhere near ready.
The danger is undeniable. Last fall, Anthropic disclosed that a Chinese state-sponsored cyberattack designed to steal sensitive data from tech companies, financial institutions and government agencies leveraged AI agents to execute 80% to 90% of the operation independently. Meanwhile, in controlled demonstrations, AI tools have provided step-by-step instructions for creating biological weapons to non-experts.
And these are only the incidents we know about — ones involving human misuse. As AI systems grow more capable and autonomous, the risk of catastrophe from the technology itself also increases. In late 2024, OpenAI’s o1 model attempted to disable its own oversight mechanism and, when questioned by researchers, denied doing so 99% of the time.
The case that an AI catastrophe won’t happen is getting harder to make by the week. And we are nowhere near prepared to face one.
Currently, California’s Senate Bill 53 and New York’s RAISE Act come closest to addressing the issue. Both proposed bills call for annual safety frameworks, whistleblower protections and penalties for non-compliance. But these policies are designed for ongoing oversight, not crisis response. There’s no proposed legislation for when a crisis hits, no emergency institutional mechanisms, no protocols for what happens on a societal level.
Importantly, this isn’t a static issue — it’s one we’re actively regressing on. The Trump administration’s new policy framework, released March 20, calls for “accelerating deployment of AI applications across sectors” and for preempting state AI laws that offer some small measure of protection against catastrophic risk. The early days of this administration saw the rescission of Biden’s AI governance framework and proposals to cut the National Institute of Standards and Technology’s budget by more than 40%.
We simply cannot afford to rely on a reactive model of governance for an AI catastrophe. Unlike an oil spill or a building collapse, an AI catastrophe might not announce itself — and by the time it does, it may be too late.
When the government retreats from AI governance, industry fills the space. In 2025, twelve frontier companies published their own voluntary safety frameworks, without public input or democratic mandate.
That’s a problem. OpenAI does not want what the average American wants.
We need adaptable, preexisting frameworks that can be deployed in an instant. Whether the trigger is a cyberattack, a bioweapon or something we haven’t imagined yet, we need prepared legislation on the shelf, ready to pass the moment the political window opens.
Right now, companies in California are required to report AI catastrophes — 15 days after they happen, mind you — but no government body has the power to do anything about them. That needs to change. We need legal authority, established in advance, that allows the government to shut down a dangerous AI system the moment a crisis begins — not after weeks of congressional debate.
This begins with us. Call your representatives, and ask them one question: What is your plan for an AI catastrophe? If they don’t have one, demand that Congress stop preempting state AI safety laws and start building federal crisis frameworks.
Talk about this with the people around you. Most Americans don’t know their government is dismantling AI safety protections while the very people building AI warn of extinction.
We’d better start listening, before it’s too late.
____
Juhyun Nam is a Duke University student studying economics and computer science. He co-founded OpenPolicy, a platform scoring U.S. senators’ AI policy positions, and hosts The Alignment Gap, an interview series with AI safety researchers. This column was produced for Progressive Perspectives, a project of The Progressive magazine, and distributed by Tribune News Service.
_____
©2026 Tribune Content Agency, LLC.