This illustration, taken on March 1, 2026, shows the U.S. Department of Defense and Anthropic logos.
Dado Ruvik | Reuters
In the wake of last weekend’s U.S. attack on Iran and the Pentagon’s blacklisting of Anthropic’s AI models, employees at OpenAI, Google and other tech companies are circulating a series of letters calling for clearer restrictions on how their employers can work with the military.
One open letter, titled “We Will Not Be Divided,” went from a few hundred names on Friday to nearly 900 on Monday, with nearly 100 signatories from OpenAI and nearly 800 from Google. The letter focused on the Pentagon’s actions against Anthropic, which refused to allow its technology to be used in mass surveillance or fully autonomous weapons.
“They are trying to divide each company out of fear that the other will give in,” the letter said. “That strategy only works if none of us know the other’s position. This letter helps create common understanding and unity in the face of this pressure from the Department of Defense.”
Combat operations began in Iran on Friday, hours after the Trump administration moved to blacklist Anthropic and designate the company as a “supply chain risk.” The U.S. government claimed the attack on Iran was necessary to neutralize the “immediate threat” posed by Iran’s nuclear and missile programs, but the move appears to have spurred more tech workers to sign the various petitions.
Tensions in the tech industry have been rising in recent months, largely due to increased aggression by federal immigration agents, including the killing of two Americans in Minnesota earlier this year. Workers in the industry are demanding greater transparency in the work their employers do with the government, particularly in cloud and artificial intelligence contracts.
For Google, the backlash comes as the company is reportedly in talks with the Pentagon to deploy its AI model, Gemini, on classified systems, reigniting a long-standing internal battle over military AI.

No Tech for Apartheid, a group that has long criticized cloud deals between the U.S. government and big tech companies, released a joint statement Friday titled “Amazon, Google and Microsoft must reject the Department of Defense’s demands.”
The coalition said the three cloud infrastructure leaders should reject the Pentagon’s terms that would enable mass surveillance and other AI abuses, and called for more clarity on contracts involving the military and agencies such as the Department of Homeland Security and Immigration and Customs Enforcement (ICE).
The group singled out Google, warning that a Pentagon agreement could mirror the one that lets the department deploy Elon Musk’s xAI model Grok “in sensitive environments without any guardrails to our knowledge.”
“Our own company is on the brink of accepting similar terms,” the statement said. “Google is in talks with the Department of Defense to deploy its own frontier model, Gemini, for classified applications.”
While Anthropic and OpenAI have made numerous public statements regarding the status of negotiations and contracts with the Department of Defense, Google’s parent company Alphabet has remained silent. The company did not respond to multiple requests for comment.
“Supply chain risk”
In another effort to support Anthropic, hundreds of tech workers signed an open letter asking the Department of Defense to rescind its designation of the company as a “supply chain risk.” The signatories include dozens of OpenAI employees, as well as workers at companies such as Salesforce, Databricks, IBM and Cursor.
The letter asks Congress to “consider whether the use of these special powers against U.S. technology companies is appropriate,” and says Anthropic and other private companies should not be subject to retaliation for refusing to comply with government requests.
Similar concerns emerged within Google last week, with more than 100 employees involved in AI technology reportedly signing a letter to management expressing concerns about the company’s collaboration with the Department of Defense. According to the New York Times, they called on the search giant to draw the same red line as Anthropic.
Google’s chief scientist, Jeff Dean, received the memo and appeared to share at least some of the concerns. “Mass surveillance violates the Fourth Amendment and has a chilling effect on free expression,” he wrote in a thread on X.
He added that the surveillance system is “vulnerable to abuse for political or discriminatory purposes.”
Google has grappled with related controversies before.
Jeff Dean, head of artificial intelligence at Google LLC, speaks at a Google AI event on Tuesday, January 28, 2020, in San Francisco, California.
David Paul Morris | Bloomberg | Getty Images
In 2018, the company faced an internal revolt over Project Maven, a Pentagon program that uses AI to analyze drone footage. After thousands of employees protested, Google let the contract lapse. The company later established “AI Principles” that govern how its technology can be used.
The tension has persisted. In 2024, Google laid off more than 50 employees following protests over Project Nimbus, a $1.2 billion joint deal with Amazon to provide cloud services to the Israeli government. Executives reiterated that the deal does not violate the company’s AI principles. But the agreement allows Google to provide AI tools to Israel, including image classification, object tracking and provisions for state-owned weapons manufacturers, according to documents and reports.
In December of that year, the New York Times reported that four months before the Nimbus deal, company officials were concerned that signing the deal would damage its reputation and that “Google Cloud services could be used to facilitate or be associated with human rights abuses.”
Early last year, Google reportedly revised its AI principles to remove language that explicitly prohibited “weapons production” and “surveillance technology.”
Remember: Anthropic, the Department of Defense and software sales are not separate stories.