A week after President Trump announced the end of the relationship, the Pentagon told Anthropic that the two sides were largely in agreement, a new court filing reveals.

By Editor-In-Chief | March 20, 2026


Late Friday afternoon, Anthropic filed two declarations in California federal court pushing back against the Pentagon’s claims that the company poses an “unacceptable risk to national security,” arguing that the government’s case rests on technical misunderstandings and on claims that were never actually raised during the months of negotiations that preceded the dispute.

The declarations accompany Anthropic’s reply brief in its lawsuit against the Department of Defense, filed ahead of a hearing before Judge Rita Lin in San Francisco on Tuesday, March 24.

The dispute dates back to late February, when President Trump and Secretary of Defense Pete Hegseth publicly announced they were cutting ties with Anthropic after the company refused to allow unrestricted military use of its AI technology.

The declarations were submitted by Sarah Heck, Anthropic’s head of policy, and Thiyagu Ramasamy, the company’s head of public sector.

Ms. Heck is a former National Security Council official who worked in the White House during the Obama administration before joining Stripe and then Anthropic, where she runs the company’s government relations and policy operations. She was personally present at the Feb. 24 meeting where CEO Dario Amodei met with Defense Secretary Hegseth and Under Secretary of Defense Emil Michael.

In her declaration, Heck points to what she describes as the central falsehood of the government’s filings: that Anthropic claimed some type of approval role over military operations. According to her, that claim is simply not true. “During Anthropic’s negotiations with the department, at no time did I or any other Anthropic employee indicate that the company wanted such a role,” she wrote.

She also noted that the Pentagon’s concern that Anthropic could disable or alter its technology mid-operation was never raised during negotiations. Instead, she says, it surfaced for the first time in the government’s court filing, leaving Anthropic no chance to respond.

Another notable detail in the Heck declaration: on March 4, the day after the Pentagon formalized the supply chain risk designation against Anthropic, Under Secretary Michael emailed Amodei to say the two sides were “very close” on the administration’s positions on autonomous weapons and mass surveillance of American citizens, the very two issues the administration now cites as evidence that Anthropic poses a national security threat.

This email, which Heck attached to her declaration, is worth reading alongside what Michael said publicly in the days that followed. On March 5, Amodei released a statement saying the company had had “productive conversations” with the Department of Defense. The next day, Michael posted on X that there were “no active negotiations between the Department of the Army and Anthropic.” A week later, he told CNBC there was “no chance” of new talks.

What Heck seems to be saying is this: if Anthropic’s positions on these two issues make it a national security threat, why did the Pentagon’s own officials say, shortly after the designation was finalized, that the two sides were largely in agreement on those very issues? (She never says outright that the government used the designation as a bargaining chip, but the timeline she lays out leaves the question open.)

Mr. Ramasamy brings a different kind of expertise to the case. Before joining Anthropic in 2025, he spent six years at Amazon Web Services managing AI deployments for government customers, including in sensitive environments. At Anthropic, he is credited with building the team that brought Claude models into national security and defense work, including a $200 million contract with the Department of Defense announced last summer.

His declaration takes aim at the government’s claim that Anthropic could theoretically interfere with military operations by disabling its technology or otherwise changing its behavior; Ramasamy says that is technically impossible. Once Claude is deployed inside a government-controlled, air-gapped system operated by third-party contractors, he said, Anthropic cannot access it: there are no remote kill switches, no backdoors, and no mechanism to push unauthorized updates. Any kind of “operational veto” is a fiction, he suggested, because any change to the model would require explicit approval and activation by the Pentagon.

Anthropic says it can’t even see what government users are entering into the system, much less extract that data.

Ramasamy also disputes the government’s claim that Anthropic’s foreign employees pose a security risk. He noted that Anthropic employees undergo security clearance screening, the background-check process the U.S. government requires for access to classified information, and added that, “to my knowledge,” Anthropic is the only AI company whose models designed to operate in classified environments are actually built by cleared personnel.

Anthropic’s lawsuit alleges that the supply chain risk designation, the first ever applied to a U.S. company, amounts to government retaliation for the company’s public views on AI safety and violates the First Amendment.

In a 40-page filing earlier this week, the government rejected that framing outright, arguing that Anthropic’s refusal to permit certain military uses of its technology was a business decision, not protected speech, and that the designation was a straightforward national security judgment, not punishment for the company’s views.


