File photo: The Pentagon seen from above in Washington, USA on March 3, 2022.
Joshua Roberts | Reuters
Anthropic has been at odds with the Pentagon over how it uses artificial intelligence models, and its work with the agency is “under consideration,” a Pentagon spokesperson told CNBC.
The five-year-old startup won a $200 million contract with the Department of Defense last year. As of February, Anthropic is the only AI company to deploy its models to sensitive government networks and provide customized models to national security customers.
But negotiations over future terms of use have stalled, Emil Michael, the undersecretary of defense for research and engineering, said at a summit in Florida on Tuesday.
According to a report from Axios, Anthropic is seeking assurances that its models will not be used for autonomous weapons or “collective spying on American citizens.”
The Pentagon, by contrast, wants to use Anthropic’s models without restriction “for all legitimate use cases.”
“If one company doesn’t want to respond, that’s a problem for us,” Michael said. “That could create a dynamic where we start using those models and get used to how those models work. And then when we need to use it in an emergency situation, we may not be able to use it.”
This is the latest wrinkle in Anthropic’s increasingly fraught relationship with the Trump administration, which has publicly criticized the company in recent months.
David Sacks, the venture capitalist who serves as the administration’s AI and cryptocurrency czar, accused Anthropic of supporting “woke AI” because of its stance on regulation.
An Anthropic spokesperson said the company is having “good faith and productive conversations” with the Department of Defense about how to “resolve these complex issues correctly.”
“Anthropic is committed to leveraging frontier AI in support of U.S. national security,” the spokesperson added.
Anthropic’s rivals OpenAI, Google and xAI were also each awarded contracts of up to $200 million by the Department of Defense last year.
Those companies have agreed to let the Pentagon use their models for any lawful purpose within the military’s unclassified systems, with one company agreeing to that use for “all systems,” said a senior Pentagon official who requested anonymity because the negotiations are confidential.
If Anthropic ultimately does not agree to the Department of Defense’s terms of service, the agency could label the company a “supply chain risk,” which would require vendors and contractors to certify that they are not using Anthropic’s models, the person said.
That designation is typically reserved for foreign adversaries, so it would be an especially damaging blow for Anthropic.
The company was founded in 2021 by a group of former OpenAI researchers and executives and is best known for developing a family of AI models called Claude.
Anthropic announced earlier this month that it had closed a $30 billion funding round at a valuation of $380 billion, more than double the valuation from its previous raise in September.
