Sam Altman, CEO of OpenAI Inc., attends the AI Impact Summit on Thursday, February 19, 2026 in New Delhi, India.
Prakash Singh | Bloomberg | Getty Images
OpenAI CEO Sam Altman announced late Friday that the company had agreed to terms with the Department of Defense over the use of its artificial intelligence models, shortly after President Donald Trump said the government would not work with AI rival Anthropic.
“Tonight, we reached an agreement with the Department of War to deploy our models on classified networks,” Altman said in a post on X. “In all of our interactions, the DoW has demonstrated a deep respect for security and a desire to partner to achieve the best possible outcomes.”
Altman’s post came at the end of a dramatic week for the AI industry, which is at the center of a political debate over how its models are used. Earlier in the day, Defense Secretary Pete Hegseth designated Anthropic as a “supply chain risk to national security” after weeks of tense negotiations. That label is typically reserved for foreign adversaries and requires Department of Defense vendors and contractors to certify that they are not using Anthropic’s models.
President Trump also directed all federal agencies in the United States to “immediately cease” using Anthropic’s technology.
Anthropic was the first lab to deploy its models across the Pentagon’s classified network and had been trying to negotiate the terms of a continued contract with the Pentagon before talks broke down. The company wanted assurances that its models would not be used for fully autonomous weapons or mass surveillance of American citizens, while the Pentagon wanted Anthropic to agree to the military’s use of the models in all lawful use cases.

Altman told employees in a memo Thursday that OpenAI shares the same “red lines” as Anthropic. He said in a post Friday that the Pentagon agreed to the restrictions.
“Two of our most important security principles are the prohibition of domestic mass surveillance and human responsibility for the use of force, including autonomous weapons systems,” Altman wrote. “The DoW subscribes to these principles and reflects them in law and policy, and we are incorporating them into our contracts.”
It was not immediately clear why the Pentagon agreed to work with OpenAI instead of Anthropic, but government officials have criticized Anthropic in recent months for being overly concerned about the safety of its AI.
Altman said OpenAI will build “technical safeguards to ensure the models behave as expected” and will have personnel in place to “assist the models and ensure they are secure.”
“We are asking the DoW to offer similar terms to all AI companies, which, in our opinion, everyone should be willing to accept,” Altman wrote. “We have expressed a strong desire to see the situation de-escalate from legal and government action and move toward a reasonable agreement.”
Anthropic said in a statement Friday that it is “deeply saddened” by the Department of Defense’s decision to designate the company as a supply chain risk. The company said it plans to challenge the designation in court.