Smart Breaking News on AI, Business, Politics & Global Trends | WhistleBuzz
A week after President Trump announced the end of the relationship, the Pentagon told Anthropic that the two sides were largely in agreement, a new court filing reveals.

By Editor-In-Chief · March 20, 2026
Late Friday afternoon, Anthropic filed two affidavits in California federal court pushing back against the Pentagon’s claims that AI companies pose an “unacceptable risk to national security,” arguing that the government’s lawsuit is based on technical misunderstandings and claims that were not actually raised during the months of negotiations that preceded the dispute.

The declarations accompany Anthropic's reply brief in its lawsuit against the Department of Defense, filed ahead of a hearing before Judge Rita Lin in San Francisco on March 24.

The dispute dates back to late February, when President Trump and Secretary of Defense Pete Hegseth publicly announced they were cutting ties with Anthropic after the company refused to allow unrestricted military use of its AI technology.

The declarations were submitted by Sarah Heck, Anthropic's head of policy, and Thiyagu Ramasamy, the company's head of public sector.

Heck is a former National Security Council official who worked in the White House under the Obama administration before joining Stripe and then Anthropic, where she has run the company's government relations and policy operations. She was personally present at the Feb. 24 meeting at which CEO Dario Amodei met with Defense Secretary Hegseth and Under Secretary of Defense Emil Michael.

In her declaration, Heck identifies what she describes as the central falsehood in the government's filings: that Anthropic claimed some kind of approval role over military operations. According to her, that claim is simply not true. "During Anthropic's negotiations with the department, at no time did I or any other Anthropic employee indicate that the company wanted such a role," she wrote.

She also noted that the Pentagon's concern that Anthropic could disable or alter its technology mid-operation was never raised during negotiations. Instead, she says, it surfaced for the first time in the government's court filing, and Anthropic was given no chance to respond.

Another notable detail in the Heck declaration: on March 4, the day after the Pentagon formalized the supply chain risk designation against Anthropic, Under Secretary Michael emailed Amodei to say the two sides were "very close" on the administration's positions on autonomous weapons and mass surveillance of American citizens, the very two issues the administration now cites as evidence that Anthropic poses a national security threat.

This email, which Heck attached to her declaration, is worth reading alongside what Michael said publicly in the days that followed. On March 5, Amodei released a statement saying the company had had "productive conversations" with the Department of Defense. The next day, Michael posted on X that there were "no active negotiations between the Department of the Army and Anthropic." A week later, he told CNBC there was "no chance" of new talks.

What Heck seems to be saying is this: if Anthropic's positions on these two issues make it a national security threat, why did the Pentagon's own officials say, shortly after the designation was finalized, that the two sides were largely in agreement on those very issues? (She never claims the government used the designation as a bargaining chip, but the timeline she lays out leaves the question open.)

Ramasamy brings a different kind of expertise to the case. Before joining Anthropic in 2025, he spent six years at Amazon Web Services managing AI deployments for government customers, including in sensitive environments. At Anthropic, he is credited with building the team that brought Claude models into national security and defense work, including a $200 million contract with the Department of Defense announced last summer.

His declaration takes on the government's claim that Anthropic could theoretically interfere with military operations by disabling its technology or otherwise changing its behavior; Ramasamy says that is technically impossible. Once Claude is deployed inside a government-protected "air-gapped" system operated by third-party contractors, he said, Anthropic cannot access it. There are no remote kill switches, no backdoors, and no mechanism to push unauthorized updates. Any notion of an "operational veto" is a fiction, he suggested, because any change to the model would require explicit approval and activation by the Pentagon.

Anthropic says it can’t even see what government users are entering into the system, much less extract that data.

Ramasamy also disputes the government's claim that Anthropic's foreign hiring poses a security risk. He noted that Anthropic employees undergo the background-check process the U.S. government requires for access to classified information, and added in the declaration that, "to my knowledge," Anthropic is the only AI company whose models designed to operate in classified environments are actually built by qualified personnel.

Anthropic's lawsuit alleges that the supply chain risk designation, the first ever applied to a U.S. company, amounts to government retaliation for the company's public views on AI safety and violates the First Amendment.

In a 40-page filing earlier this week, the government rejected that framing outright, saying Anthropic's refusal to permit certain military uses of its technology was a business decision, not protected speech, and that the designation was a straightforward national security judgment, not punishment for the company's views.


