The deal comes days after the Pentagon announced agreements with seven tech giants to use AI in classified systems.
Published May 5, 2026
Tech giants Microsoft, Google and xAI have announced that they will grant the US federal government access to their new artificial intelligence models for national security testing.
The Commerce Department’s Center for AI Standards and Innovation (CAISI) announced the agreement on Tuesday amid growing concern about the threat that Anthropic’s newly released Mythos model could pose in the hands of hackers.
Under the new agreement, the U.S. government will be allowed to conduct research to evaluate the companies’ models and assess their capabilities and security risks before deployment.
The agreement fulfills a pledge made in July by President Donald Trump’s administration to partner with technology companies to scrutinize AI models for “national security risks.”
Microsoft will work with U.S. government scientists to test its AI systems “in ways that explore unexpected behavior,” the company said in a statement, adding that the partners will develop shared datasets and workflows to test Microsoft’s models.
Microsoft has signed a similar agreement with the UK’s AI Security Institute, according to a statement.
There is growing concern in Washington about the national security risks posed by powerful AI systems. By securing early access to the frontier model, U.S. officials aim to identify threats ranging from cyberattacks to military exploitation before the tools are widely deployed.
Recent advances in AI systems, including Anthropic’s Mythos, have caused a stir around the world, including among U.S. government officials and American companies, over the models’ potential to empower hackers.
“Independent, rigorous measurement science is essential to understanding frontier AI and its national security implications,” CAISI Director Chris Fall said in a statement.
The move builds on a 2024 agreement with OpenAI and Anthropic under President Joe Biden’s administration, when CAISI was known as the US AI Safety Institute. Under the Biden administration, the institute focused on testing, defining, and developing voluntary safety standards for AI. It was led at the time by Biden’s technology adviser Elizabeth Kelly, who has since joined Anthropic, according to her LinkedIn profile.
CAISI, which serves as the government’s primary hub for AI model testing, said it has already completed more than 40 evaluations, including of cutting-edge models that have not yet been released to the public.
The agency said developers often hand over versions of models with safety guardrails removed so the center can examine national security risks.
xAI did not immediately respond to a request for comment. Google declined to comment.
On Wall Street, Microsoft’s stock price fell 0.6% in midday trading following the announcement, while shares of Google’s parent company Alphabet rose 1.3%. xAI is not publicly traded.
The announcement follows an agreement between the Department of Defense and seven major tech companies (Google, Microsoft, Amazon Web Services, Nvidia, OpenAI, Reflection, and SpaceX) to use AI systems across sensitive computer networks.
The Pentagon said the agreement will provide resources to “enhance warfighter decision-making in complex operational environments.”
Notably absent from the list is AI company Anthropic, which engaged in a public debate and legal battle with the Trump administration over the ethics and safety of using AI in war.
