Microsoft CEO Satya Nadella attends the 56th World Economic Forum (WEF) Annual Meeting in Davos, Switzerland, January 20, 2026.
Microsoft sided with Anthropic on Tuesday, saying a judge should issue a restraining order blocking the Pentagon’s designation of the artificial intelligence giant as a supply chain risk “on all existing contracts.”
Such an order “allows for a more orderly transition and avoids disrupting the continued use of advanced AI by the U.S. military,” Microsoft said in a filing in U.S. District Court in San Francisco. Without the order, Microsoft warned, it and other technology companies would have to “act immediately to modify existing products and contract structures” used by the Department of Defense.
“This could potentially hinder U.S. warfighters at critical points,” the filing states.
Last week, the Department of Defense formally banned Anthropic’s technology, labeling the company a supply chain risk, a designation historically reserved for foreign adversaries. The ban is effective immediately and requires defense vendors and contractors to certify that they are not using Anthropic models in their work with the department.
Anthropic sued the Trump administration on Monday, claiming the government’s actions were “unprecedented and illegal” and “causing irreparable harm to Anthropic,” putting hundreds of millions of dollars worth of contracts at risk in the short term.
Microsoft’s comments Tuesday were included in a draft amicus brief filed with the court. Amicus briefs are filed by parties who are not named in a particular case but have relevant expertise or are affected by the outcome.
Microsoft announced plans in November to invest up to $5 billion in Anthropic. The company has also been a major investor in rival OpenAI since 2019.
Anthropic had been renegotiating its contract with the Department of Defense in recent weeks, but negotiations broke down after the two sides could not agree on how the military would be allowed to use Anthropic’s model, known as Claude.
Anthropic sought assurances that its models would not be used for fully autonomous weapons or domestic mass surveillance, while the Pentagon wanted the company to give the military unfettered access for all lawful purposes. Neither side budged.
Following the Pentagon’s ban announcement, Microsoft and its biggest cloud rivals, Amazon and Google, each issued updates to customers, informing them that non-defense Anthropic products will remain accessible on their cloud platforms.
Microsoft said in the filing that a temporary restraining order would allow Anthropic and the Department of Defense to pursue “a negotiated solution that better serves all parties involved and avoids broader business impacts.”
“We believe all parties involved share a common goal, and finding common ground takes time and process,” a Microsoft spokesperson said in a statement. “The Department of Defense needs reliable access to the nation’s best technology, and everyone wants to ensure that AI is not used for domestic mass surveillance or to start wars without human control.”
Founded in 2021 by a group of former OpenAI executives, Anthropic has grown into one of the fastest-growing technology startups in the U.S., most recently valued at $380 billion.
–CNBC’s Dan Mangan and Laura Kolodny contributed to this report