
Sam Altman, CEO of OpenAI Inc., attends the AI Impact Summit on Thursday, February 19, 2026 in New Delhi, India.
Prakash Singh | Bloomberg | Getty Images
OpenAI CEO Sam Altman told staff late Thursday that he would like to “try to de-escalate the situation” between rival Anthropic and the Pentagon.
“I have long believed that AI should not be used for mass surveillance or autonomous lethal weapons, and that humans should always be on top of high-stakes automated decisions,” Altman said in a memo seen by CNBC. “These are our main red lines.”
Anthropic has until Friday at 5:01 p.m. ET to decide whether to grant the Department of Defense permission to use its artificial intelligence models in all legitimate use cases without restriction. The company has sought assurances that its technology will not be used for fully autonomous weapons or for domestic mass surveillance of American citizens, but the Pentagon has not changed its stance.
Altman’s internal letter on Thursday was aimed at showing that OpenAI shares Anthropic’s red lines. The Wall Street Journal first reported the memo.

Prior to Altman’s memo, OpenAI employees had begun speaking out in support of Anthropic on social media. About 70 current employees signed an open letter titled “We Will Not Divide,” which aims to create “common understanding and unity in the face of this pressure” from the department, according to the letter’s website.
“Despite my differences with Anthropic, I pretty much trust them as a company. I think they really care about safety and I’m happy that they’re supporting our warfighters,” Altman said in an interview with CNBC on Friday. “I don’t know what will happen next.”
OpenAI won a $200 million contract from the Department of Defense last year, allowing the agency to begin using the startup’s models in unclassified use cases. Anthropic is the first AI lab to integrate its models into mission workflows on classified networks.
Altman said he will see if OpenAI can strike an agreement with the Department of Defense to deploy models in classified environments in a manner “consistent with our principles.” He said the company will build technical safety measures and staff to “ensure things work properly.”
“We want contracts to cover all uses except those that are illegal or unsuitable for cloud deployment, such as domestic surveillance or autonomous assault weapons,” Altman said.
Altman said OpenAI has been meeting on the topic in recent days, but has not yet reached a decision on what to do. He said there will be further meetings with OpenAI’s safety team on Friday.
“This, to me, is a case where doing the right thing is important, not a simple matter of appearing strong but being dishonest,” Altman wrote. “However, we understand that it may not be ‘good’ for us in the short term and there are many nuances and contexts.”
—CNBC’s Kate Rooney contributed to this report.

