According to multiple reports, Google has given the U.S. Department of Defense access to its AI for sensitive networks, allowing virtually all lawful uses.
The deal follows Anthropic’s public standoff with the Trump administration, which began after the model maker refused to give the Pentagon the same terms. The Pentagon wanted unrestricted use of the AI, but Anthropic insisted on guardrails to prevent its models from being used for domestic mass surveillance or autonomous weapons.
After Anthropic rejected these use cases, the Department of Defense branded the model maker a “supply chain risk,” a designation typically reserved for foreign adversaries. Anthropic and the Department of Defense are now embroiled in litigation; last month, a judge granted Anthropic an injunction suspending the designation while the case proceeds.
Google becomes the third AI company to try to turn Anthropic’s losses into its own gain; OpenAI and xAI each quickly signed contracts with the Department of Defense. The Wall Street Journal reported that Google’s contract includes language stating that the AI is not intended for use in domestic mass surveillance or autonomous weapons, similar to language in OpenAI’s contract. According to the Journal, however, it is unclear whether such clauses are legally binding or enforceable.
Google made the deal even though 950 of its employees had signed an open letter urging it to follow Anthropic’s lead and refuse to sell AI to the Department of Defense without similar guardrails. Google did not respond to a request for comment.
