The Department of Defense (DOD) has formally notified Anthropic leadership that the company and its products have been designated as supply chain risks, Bloomberg reported, citing a senior DOD official.
The designation comes after weeks of conflict between the AI lab and the Department of Defense. Anthropic CEO Dario Amodei has refused to allow the military to use his company’s AI systems for mass surveillance of American citizens or to power fully autonomous weapons that select targets and fire without human involvement. The Pentagon maintains that its use of AI should not be restricted by private contractors.
Supply chain risk designations are typically reserved for foreign adversaries. The label would require companies and agencies working with the Department of Defense to certify that they are not using Anthropic models.
The Pentagon’s decision threatens to disrupt both the company and the department’s own operations. Anthropic is the only frontier AI lab whose models are deployed on classified systems. The U.S. military currently relies on Claude for operations in Iran, where AI tools are used to rapidly manage operational data. According to Bloomberg, Claude is one of the key models installed in Palantir’s Maven Smart System, which military operators in the Middle East rely on.
Some critics say labeling Anthropic a supply chain risk over this disagreement is an unprecedented move by the department. Dean Ball, who served as an AI adviser to President Trump, called the designation the “death rattle” of the American republic, arguing that the government is abandoning strategic clarity and respect in favor of a “vicious” tribalism that treats domestic innovators worse than foreign adversaries.
Hundreds of OpenAI and Google employees urged the Department of Defense to revoke the designation and called on Congress to rescind what could be seen as an improper exercise of power over U.S. technology companies. They also called on national leaders to stand together in continuing to reject the Pentagon’s demands to use their AI models for domestic mass surveillance and “autonomously killing people without human oversight.”
TechCrunch has reached out to Anthropic for comment.
Amid the conflict, OpenAI signed its own agreement with the Department of Defense that allows the military to use its AI systems for “all lawful purposes.” Some of the company’s employees have expressed concerns about vague language in the contract, which could permit exactly the kinds of uses Anthropic was trying to avoid.
Amodei called the Pentagon’s actions “retaliatory and punitive,” and his refusal to praise President Trump or donate to him reportedly contributed to the conflict with the Pentagon. OpenAI President Greg Brockman, by contrast, is an avid Trump supporter and recently donated $25 million to the MAGA Inc. super PAC.
