Secretary of War Pete Hegseth said in a post on X that he is directing the Department of War, formerly known as the Department of Defense, to designate American AI firm Anthropic a “Supply-Chain Risk to National Security.”

In the same post, he said the designation would take effect immediately and that “no contractor, supplier, or partner that does business with the United States military may conduct any commercial activity with Anthropic,” while allowing up to six months for a transition away from Anthropic’s services.

The designation and what triggered it

In a statement published on Feb. 27 in response to the action, Anthropic said that it made clear to the Department of War that it supported “lawful uses” of AI for national security aside from two exceptions: mass domestic surveillance of Americans and fully autonomous weapons.

However, Anthropic said the Department stated it would only contract with AI companies that accept “any lawful use” and remove safeguards for those cases.

What Amodei said the Department threatened

In a Feb. 26 statement, Anthropic CEO Dario Amodei said the Department threatened to remove Anthropic from government systems if the safeguards remained.

He said the Department also threatened to designate Anthropic a “supply chain risk,” which he characterized as a label “historically reserved for US adversaries” and never before publicly applied to an American company, and to invoke the Defense Production Act to force the safeguards’ removal.

Amodei said Claude has been deployed for classified use cases including intelligence analysis, operational planning, modeling and simulation, and cyber operations.

He also said Anthropic was the first frontier AI lab to deploy models in US government classified networks and at national laboratories and the first to provide custom models to national security customers.

Anthropic said it did not seek to object to specific operations and has never tried to limit military actions on an ad hoc basis. It said it offered to keep its models available on the “expansive terms” it proposed for as long as needed to avoid disrupting ongoing missions during any transition.

What the legal record says about the designation

A client alert from Willkie Farr & Gallagher said designation of an American company as a supply chain risk under 10 U.S.C. § 3252 appears unprecedented and noted that the provision has previously been used to target foreign adversaries such as Russia’s Kaspersky and China’s Huawei.

The alert said Section 3252 is typically implemented through findings by contracting and information security officials and said it is unclear whether the Defense Department followed that process here.

Willkie said the scope and immediate effect of Hegseth’s order are disputed, but said the incident highlights risks of more aggressive actions that can affect federal contractors through procurement controls and exclusion decisions tied to designated technologies.

Terminology note: Anthropic and Hegseth use “Department of War” and “Secretary of War” while Willkie’s Client Alert uses “Department of Defense,” “Secretary of Defense” and “the Pentagon” to refer to the same department, which was renamed by executive action in September 2025.

How OpenAI structured its agreement differently

OpenAI announced a Pentagon agreement the next day and has positioned its terms as narrower than “any lawful use.” On March 2, OpenAI updated its post to add language stating its system “shall not be intentionally used for domestic surveillance of U.S. persons and nationals,” including through the use of commercially acquired personal or identifiable information.

OpenAI said the deployment is cloud-only rather than on edge devices and said it retains control of its safety stack, including classifiers it can update to verify the contract’s restrictions are not crossed.

It also published contract excerpts describing use for “all lawful purposes” consistent with applicable law, operational requirements and safety and oversight protocols while restricting the system from independently directing autonomous weapons systems where law, regulation or Department policy requires human control.

In describing how autonomous weapons constraints map to existing Department policy, OpenAI cited DoD Directive 3000.09, which establishes policy for autonomous and semi-autonomous functions in weapon systems and sets guidelines designed to minimize the probability and consequences of failures that could lead to unintended engagements.
