Hegseth and Anthropic CEO set to meet as debate intensifies over the military’s use of AI

WASHINGTON – Defense Secretary Pete Hegseth plans to meet Tuesday with the CEO of Anthropic, the only one of the major artificial intelligence companies that has not supplied its technology to a new U.S. military internal network.

Anthropic, maker of the chatbot Claude, declined to comment on the meeting but CEO Dario Amodei has made clear his ethical concerns about unchecked government use of AI, including the dangers of fully autonomous armed drones and of AI-assisted mass surveillance that could track dissent.

The meeting between Hegseth and Amodei was confirmed by a defense official who was not authorized to comment publicly and spoke on condition of anonymity.

It underscores the debate over AI’s role in national security and concerns about how the technology could be used in high-stakes situations involving lethal force, sensitive information or government surveillance. It also comes as Hegseth has vowed to root out what he calls a “woke culture” in the armed forces.

“A powerful AI looking across billions of conversations from millions of people could gauge public sentiment, detect pockets of disloyalty forming, and stamp them out before they grow,” Amodei wrote in an essay last month.

Anthropic is the only AI company approved for classified military networks

The Pentagon announced last summer that it was awarding defense contracts to four AI companies: Anthropic, Google, OpenAI and Elon Musk’s xAI. Each contract is worth up to $200 million.

Anthropic was the first AI company to get approved for classified military networks, where it works with partners like Palantir. The other three companies, for now, are only operating in unclassified environments.

By early this year, Hegseth was highlighting only two of them: xAI and Google.

The defense secretary said in a January speech at Musk’s space flight company, SpaceX, in South Texas that he was shrugging off any AI models “that won’t allow you to fight wars.”

Hegseth said his vision for military AI systems means that they operate “without ideological constraints that limit lawful military applications,” before adding that the Pentagon’s “AI will not be woke.”

In January, Hegseth said Musk’s artificial intelligence chatbot Grok would join the Pentagon network, called GenAI.mil. The announcement came days after Grok, which is embedded into X, the social media network owned by Musk, drew global scrutiny for generating highly sexualized deepfake images of people without their consent.

OpenAI announced in early February that it, too, would join the military’s secure AI platform, enabling service members to use a custom version of ChatGPT for unclassified tasks.

Anthropic calls itself more safety-minded

Anthropic has long pitched itself as the more responsible and safety-minded of the leading AI companies, ever since its founders quit OpenAI to form the startup in 2021.

The uncertainty with the Pentagon is putting those intentions to the test, according to Owen Daniels, associate director of analysis and fellow at Georgetown University’s Center for Security and Emerging Technology.

“Anthropic’s peers, including Meta, Google and xAI, have been willing to comply with the department’s policy on using models for all lawful applications,” Daniels said. “So the company’s bargaining power here is limited, and it risks losing influence in the department’s push to adopt AI.”

In the AI craze that followed the release of ChatGPT, Anthropic closely aligned with President Joe Biden’s administration in volunteering to subject its AI systems to third-party scrutiny to guard against national security risks.

Amodei, the CEO, has warned of AI’s potentially catastrophic dangers while rejecting the label that he’s an AI “doomer.” He argued in the January essay that “we are considerably closer to real danger in 2026 than we were in 2023” but that those risks should be managed in a “realistic, pragmatic manner.”

Anthropic has been at odds with the Trump administration

This would not be the first time Anthropic’s advocacy for stricter AI safeguards has put it at odds with the Trump administration. Anthropic needled chipmaker Nvidia publicly, criticizing Trump’s proposals to loosen export controls to enable some AI computer chips to be sold in China. The AI company, however, remains a close partner with Nvidia.

The Trump administration and Anthropic also have been on opposite sides of a lobbying push to regulate AI in U.S. states.

Trump’s top AI adviser, David Sacks, accused Anthropic in October of “running a sophisticated regulatory capture strategy based on fear-mongering.”

Sacks made the remarks on X in response to an Anthropic co-founder, Jack Clark, writing about his attempt to balance technological optimism with “appropriate fear” about the steady march toward more capable AI systems.

Anthropic hired a number of ex-Biden officials soon after Trump’s return to the White House, but it’s also tried to signal a bipartisan approach. The company recently added Chris Liddell, a former White House official from Trump’s first term, to its board of directors.

The Pentagon-Anthropic debate is reminiscent of an uproar several years ago when some tech workers objected to their companies’ participation in Project Maven, a Pentagon drone surveillance program. While some workers quit over the project and Google itself dropped out, the Pentagon’s reliance on drone surveillance has only increased.

Similarly, “the use of AI in military contexts is already a reality and it is not going away,” Daniels said.

“Some contexts are lower stakes, including for back-office work, but battlefield deployments of AI entail different, higher-stakes risks,” he said, referring to the use of lethal force or weapons like nuclear arms. “Military users are aware of these risks and have been thinking about mitigation for almost a decade.”

___

O’Brien reported from Providence, Rhode Island.

Copyright 2026 The Associated Press. All rights reserved. This material may not be published, broadcast, rewritten or redistributed without permission.
