Parmy Olson: Claude AI helped bomb Iran. But how exactly?

Parmy Olson, Bloomberg Opinion

Published in Op Eds

The same artificial-intelligence model that can help you draft a marketing email or a quick dinner recipe has also been used to attack Iran. U.S. Central Command used Anthropic’s Claude AI for “intelligence assessments, target identification and simulating battle scenarios” during the strikes on the country, according to a report in the Wall Street Journal.

Hours earlier, President Donald Trump had ordered federal agencies to stop using Claude after a dispute with its maker, but the tool was so deeply baked into the Pentagon’s systems that it would take months to untangle in favor of a more compliant rival. It was used, too, in the January operation that led to the capture of Nicolás Maduro.

But what do “intelligence assessments” and “target identification” mean in practice? Was Claude flagging locations to strike or making casualty estimates? Nobody has disclosed that and, alarmingly, no one is obligated to.

Artificial intelligence has long been used in warfare for things like analyzing satellite imagery, detecting cyber threats and guiding missile-defense systems. But chatbots — the same underlying technology that billions use for mundane tasks like writing emails — are now being deployed on the battlefield.

Last November, Anthropic partnered with Palantir Technologies Inc., a data-analytics company that does a lot of work for the Pentagon, turning its large language model Claude into the reasoning engine inside a decision-support system for the military.

Then, in January, Anthropic submitted a $100 million proposal to the Pentagon to develop voice-controlled autonomous drone swarming technology, Bloomberg News reported. The company’s pitch: Use Claude to translate a commander’s intent into digital instructions to coordinate a fleet of drones.

Its bid was rejected, but the contest called for much more than just summarizing intelligence reports, as you might expect a chatbot to do. This contract was to develop “target-related awareness and sharing,” and “launch to termination” for potentially lethal drone swarms.

Remarkably, all of this has been happening in a regulatory vacuum and with technology that is known to make errors. Hallucinations by large language models are a result of their training, when they are rewarded for grasping for an answer instead of admitting uncertainty. Some scientists say the persistent challenge of AI confabulation may never be fixed.

This would not be the first time unreliable AI systems have been used in warfare. Lavender was an AI-driven database used to help identify military targets associated with Hamas in Gaza. It was not a large language model but analyzed vast amounts of surveillance data, such as social connections and location history, to assign each individual a score from 1 to 100. When someone’s score passed a certain threshold, Lavender flagged them as a military target.

The problem was that Lavender was wrong 10% of the time, according to an investigative report published by the Israeli-Palestinian outlet +972. “Around 3,600 people were targeted by mistake,” Mariarosaria Taddeo, a professor of digital ethics and defense technology at the Oxford Internet Institute, tells me.

“There are such incredible vulnerabilities in these systems and such extreme unreliability... for something so dynamic, sensitive and human as warfare,” says Elke Schwarz, a professor in political theory at Queen Mary University of London and author of Death Machines: The Ethics of Violent Technologies.

Schwarz points out that AI is often used in war to speed things up, a recipe for unwanted outcomes. Faster decisions are made at a greater scale and with less human scrutiny. The last decade and a half has seen military use of AI become even more opaque, she says.

And secrecy is baked into how AI labs operate even before the warfare applications. These companies refuse to disclose what data their models are trained on or how their systems reach conclusions.

 

Of course, military operations often have to be kept under wraps to protect combatants and keep enemies off the scent. But defense is heavily regulated by international humanitarian law and weapons testing standards, which in theory should also address the use of artificial intelligence. Yet such standards are missing or woefully inadequate.

Taddeo notes that Article 36 of Additional Protocol I to the Geneva Conventions requires new weapons systems to be reviewed before deployment, but an AI system that learns from its environment becomes a new system every time it updates. That makes the rule almost impossible to apply.

In an ideal world, governments like the U.S. would disclose how these systems are used on the battlefield, and there is a precedent. The Americans started using armed drones after 9/11 and expanded their use under the Barack Obama administration, all while refusing to acknowledge that such a program existed.

It took nearly 15 years of leaked documents, sustained pressure from the press and lawsuits from the American Civil Liberties Union before the Obama White House finally published in 2016 the casualty numbers from drone strikes. They were widely seen as under-counting, but they allowed the public, Congress and media to hold the government accountable for the first time.

Policing AI will be harder still, requiring even more public and legislative pressure to force a recalcitrant Trump administration to create a similar kind of reporting framework.

The goal wouldn’t be to disclose exactly how Claude was used in something like Operation Epic Fury, but to release the broad contours, according to Schwarz. And, especially, to disclose when something goes wrong.

The current public debate about the Anthropic-Pentagon feud — about what is legal and ethical for AI when it comes to the mass surveillance of Americans or creating fully autonomous weapons — is missing the bigger question about the lack of visibility into how the technology is already being used in war. With such new and untested systems prone to making mistakes, that visibility is sorely needed. “We haven’t decided as a society if we’re fine with a machine deciding if a human being should be killed or not,” says Taddeo.

Pushing for that transparency is critical before AI in warfare becomes so routine that nobody thinks to ask anymore. Otherwise we may find ourselves waiting for a catastrophic mistake, and imposing transparency only after the damage is done.

____

This column reflects the personal views of the author and does not necessarily reflect the opinion of the editorial board or Bloomberg LP and its owners.

Parmy Olson is a Bloomberg Opinion columnist covering technology. A former reporter for the Wall Street Journal and Forbes, she is author of “Supremacy: AI, ChatGPT and the Race That Will Change the World.”


©2026 Bloomberg L.P. Visit bloomberg.com/opinion. Distributed by Tribune Content Agency, LLC.

 
