Pentagon’s chief tech officer says he clashed with AI company Anthropic over autonomous warfare
A top Pentagon official said Anthropic’s dispute with the government over the use of its artificial intelligence technology in fully autonomous weapons came after a debate over how AI could be used in President Donald Trump’s future Golden Dome missile defense program, which aims to put U.S. weapons in space.
U.S. Defense Undersecretary Emil Michael, the Pentagon’s chief technology officer, said he came to view the AI company’s ethical restrictions on the use of its chatbot Claude as an irrational obstacle as the U.S. military pursues giving greater autonomy to swarms of armed drones, underwater vehicles and other machines to compete with rivals like China that could do the same.
“I need a reliable, steady partner that gives me something, that’ll work with me on autonomous, because someday it’ll be real and we’re starting to see earlier versions of that,” Michael said in a podcast aired Friday. “I need someone who’s not going to wig out in the middle.”
The comments came after the Pentagon formally designated San Francisco-based Anthropic a supply chain risk, cutting off its defense work using a rule designed to prevent foreign adversaries from harming national security systems.
Anthropic has vowed to sue over the designation, which affects its business partnerships with other military contractors.
Trump has also ordered federal agencies to immediately stop using Claude, though the Republican president gave the Pentagon six months to phase out a product that’s deeply embedded in classified military systems, including those used in the Iran war.
Anthropic said it sought only to restrict its technology from two high-level uses: mass surveillance of Americans and fully autonomous weapons.
Michael, a former Uber executive, revealed his side of months-long talks with Anthropic CEO Dario Amodei in a lengthy conversation with Silicon Valley venture capitalists Jason Calacanis, David Friedberg and Chamath Palihapitiya, co-hosts of the “All-In” podcast.
A fourth co-host, former PayPal executive David Sacks, is now Trump’s AI czar and was not present for the episode but has been a vocal critic of Anthropic, including for its hiring of former Biden administration officials shortly after Trump returned to the White House last year.
As talks hit an impasse last week, Michael lashed out at Amodei on social media, saying he “has a God-complex” and “wants nothing more than to try to personally control” the military. In the podcast, however, he positioned the dispute as part of a broader military shift toward using AI.
Michael said the military is developing procedures for enabling different levels of autonomy in warfare depending on the risk posed.
“This is part of the debate I had with Anthropic, which is we need AI for things like Golden Dome,” Michael said, sharing a hypothetical scenario of the U.S. having only 90 seconds to respond to a Chinese hypersonic missile.
A human anti-missile operator “may not be able to discriminate with their own eyes what they’re going after,” but an autonomous counterattack would be a low risk “because it’s in space and you’re just trying to hit something that’s trying to get you.”
In another scenario, he said, “who could oppose if you have a military base, you have a bunch of soldiers sleeping, that you have a laser that can take down drones autonomously?”
In response to the podcast comments, Anthropic pointed to an earlier Amodei statement saying “Anthropic understands that the Department of War, not private companies, makes military decisions. We have never raised objections to particular military operations nor attempted to limit use of our technology in an ad hoc manner.”
Michael, the defense undersecretary for research and engineering, was sworn in last May and said he took over the military’s “AI portfolio” in August. That’s when he said he began scrutinizing Anthropic’s contracts, some of which dated from President Joe Biden’s Democratic administration. Michael said he questioned Anthropic over terms of use that he deemed too restrictive.
“I need to have the terms of service be rational relative to our mission set,” he said. “So we started these negotiations. It took three months and I had to sort of give them scenarios, like this Chinese hypersonic missile example. They’re like, ‘OK, we’ll give you an exception for that.’ Well, how about this drone swarm? ‘We’ll give an exception for that.’ And I was like, exceptions doesn’t work. I can’t predict for the next 20 years what (are) all the things we might use AI for.”
That’s when the Pentagon began insisting Anthropic and other AI companies allow for “all lawful use” of their technology, Michael said.
Anthropic resisted that change, arguing that today’s leading AI systems “are simply not reliable enough to power fully autonomous weapons.”
Its competitors Google, OpenAI and Elon Musk’s xAI agreed to the Pentagon’s terms, though some still have to get their infrastructure prepared for classified military work, Michael said. The other sticking point for Anthropic was not allowing any mass surveillance of Americans.
“They didn’t want us to bulk-collect public information on people using their AI system,” Michael said, describing the negotiations as “interminable.”
Anthropic has disputed parts of Michael’s version of the talks and emphasized that the protections it sought were narrow and not based on existing uses of Claude. The next stage of the dispute will likely happen in court.
Copyright 2026 The Associated Press. All rights reserved. This material may not be published, broadcast, rewritten or redistributed without permission.