OpenAI CEO Sam Altman announced late Friday that the company has reached an agreement allowing its AI models to be used within the Department of Defense’s classified networks.
This follows a high-profile conflict between the department, renamed the Department of War under the Trump administration, and OpenAI rival Anthropic. The Pentagon has pressed AI companies, including Anthropic, to allow their models to be used for “any lawful purpose,” but Anthropic has tried to draw the line at domestic mass surveillance and fully autonomous weapons.
In a lengthy statement released Thursday, Anthropic CEO Dario Amodei said the company has “never objected to any specific military operation, nor have we ever sought to limit the use of our technology in an ad hoc manner,” but insisted that “in limited cases, we believe that AI could undermine rather than protect democratic values.”
This week, more than 60 OpenAI employees and 300 Google employees signed an open letter calling on their employers to support Anthropic’s position.
After Anthropic and the Department of Defense failed to reach an agreement, President Donald Trump criticized “Anthropic’s left-wing crazy work” in a social media post and directed federal agencies to stop using the company’s products after a six-month phase-out period.
In another post, Secretary of Defense Pete Hegseth claimed that Anthropic was trying to “seize veto power over U.S. military operational decisions.” Hegseth also said he was designating Anthropic as a supply chain risk, saying, “Effective immediately, contractors, suppliers, and partners who do business with the U.S. military may not engage in any commercial activity with Anthropic.”
Anthropic said Friday that it had “not yet received any direct communication from the Department of War or the White House regarding the status of negotiations,” but insisted it “will challenge the supply chain risk designation in court.”
Surprisingly, Altman claimed in his post on X that OpenAI’s new defense contract includes protections addressing the same issues that became a flashpoint for Anthropic.
“Two of our most important safety principles are the prohibition of domestic mass surveillance and human responsibility for the use of force, including autonomous weapons systems,” Altman said. “The DoW agrees with these principles and reflects them in its laws and policies, and we have incorporated them into our agreement.”
Altman said OpenAI “will build technical safeguards to ensure our models work as expected,” which he said the DoW also wanted, and will send engineers to the Department of War “to support our models and ensure their safety.”
“We are asking the DoW to provide similar terms to all AI companies, terms that, in our opinion, everyone should be willing to accept,” Altman added. “We have expressed a strong desire to de-escalate the situation away from legal and government action and toward a reasonable agreement.”
Fortune’s Sharon Goldman reports that Altman told OpenAI employees during an all-hands meeting that the government would allow the company to build its own “safety stack” to prevent abuse, and that “if a model refuses to perform a task, the government will not force OpenAI to do that task.”
Altman’s post came shortly before news broke that President Trump had called for the overthrow of the Iranian government and that the U.S. and Israeli governments had begun bombing Iran.
