WhistleBuzz – Smart News on AI, Business, Politics & Global Trends

Anthropic’s lawsuit against the Department of Defense could create room for AI regulation | Business and Economics News

By Editor-In-Chief | March 25, 2026 | 8 Mins Read


SAN FRANCISCO, USA: A California judge has set the stage for a possible victory for Anthropic, which advocates regulation of artificial intelligence-powered weapons. The ruling is a setback for President Donald Trump's administration and moves the company a step closer to preserving billions of dollars in government contracts.

The Trump administration has designated Anthropic a "supply chain risk" because of the company's stance in favor of increased regulation, a designation that would block it from certain military contracts.


The U.S. Department of Defense may be illegally punishing Anthropic Inc. for seeking to limit the use of its artificial intelligence (AI) models in weapons and mass surveillance without human supervision, a district judge said.

"It appears to be an attempt to neutralize Anthropic," Judge Rita Lin of the U.S. District Court for the Northern District of California said Tuesday.

Legal analysts say this could set the stage for a preliminary injunction blocking the Department of Defense's supply-chain-risk designation of Anthropic.

Charlie Block, a senior fellow at the Institute for Law and AI, a Boston-based think tank, said of the Pentagon's designation of Anthropic as a supply chain risk that "their stated objective is not fully supported by the Department of the Army."

It is the first time a U.S. company has received the designation, which would mean the termination of its government contracts as well as its contracts with government contractors.

On March 17, the Pentagon said in court that Anthropic's stance that its products cannot be used for AI-powered weapons or domestic surveillance without human oversight would undermine the department's "ability to manage its own lawful operations."

Anthropic's lawsuit seeking to remove the designation turns on the scope of AI's capabilities, how it shapes human lives, and whether it should be regulated.

“This is an opportunity to think about the kind of relationship we want between governments and businesses, and what rights people have,” said Robert Trager, co-director of the Oxford Martin AI Governance Initiative at the University of Oxford.

"Technology is moving like a freight train in the U.S., and the idea of human oversight is becoming more and more difficult," said Alison Taylor, a clinical associate professor of business and society at New York University's Stern School of Business.

Over the past two weeks, various technology companies, think tanks, and legal organizations have filed court briefs supporting Anthropic’s position and calling for the oversight and regulation of AI for weapons and mass surveillance. That support ranges from employees at Microsoft and Anthropic competitors OpenAI and Google to Catholic moral theologians and ethicists.

Engineers from OpenAI and Google DeepMind, in a brief filed in their personal capacity, said the case is of "seismic significance to our industry" and that regulation is critical because AI models' "chain of inference is often hidden from operators, the inner workings are opaque even to developers, and decisions made in lethal situations are irreversible."

Given these concerns, New York University’s Taylor said, “Anthropic is making a risky but good bet that by positioning itself as an ethical AI company, it can help shape regulation when it actually happens.”

Hallucinations and other problems

Anthropic works extensively on contracts with the Department of Defense, and its Claude Gov model is integrated into Palantir's Project Maven, helping with tasks such as data analysis and target selection, reportedly including in the ongoing US-Israel war against Iran.

Currently, no AI-enabled weapons are used without human supervision, and Anthropic's contract with the Department of Defense requires continued human supervision because AI models can hallucinate and are not yet completely reliable. Hallucinations are a concern with all AI models, but the potential harm from their use in weapons could be enormous.

Mary Cummings, a professor in George Mason University's College of Engineering and Computing and director of the Mason Autonomy and Robotics Center, found that half of the accidents involving self-driving cars in San Francisco, where most self-driving cars are deployed, occur when a car brakes unexpectedly at a perceived object in front of it, causing the car behind to collide with it.

“We call this phantom braking, and it is caused by hallucinations,” she told Al Jazeera.

In a February paper, she warned that “embedding AI in weapons will face reliability issues similar to self-driving cars, including hallucinations.”

Annika Schone, an assistant professor at Northeastern University's Bouvé College of Health Sciences who studies the impact of AI on health systems, says, "The concern is not just hallucinations. There can be different workflows, data biases, or model biases in a model like this. We still don't know how secure a model is against foreign manipulation. There are so many factors to this, and we still don't have a consensus on what we consider safe and what isn't."

Given that AI models, including Claude Gov, were not created by the military, they need to be tested to see how reliable they are when integrated into military systems, said Arok Mehta, director of the Wadhwani AI Center at the Center for Strategic and International Studies, a Washington, D.C.-based think tank.

"Evaluation and benchmark testing may lag behind; the models are saturating our testing systems," he said.

Some say it’s not so much the technology as the way it’s used that can cause errors.

"I remember in the [early 2020s] there was hope that having tools like this would reduce civilian deaths," says Andrew Reddie, an associate research professor at the Goldman School of Public Policy at the University of California, Berkeley, and founder of the Berkeley Risk and Security Lab.

"But that didn't really happen, because it depends on the data you provide. The question is not whether it's AI or not, but what counts as a legitimate target," he says of how military personnel choose targets from the options such tools provide.

As for domestic mass surveillance, it is unclear whether the Pentagon currently uses AI for that purpose, although researchers at OpenAI and Google have highlighted concerns about this in court filings.

The filings say AI could monitor the entire U.S. population by collating data from more than 70 million cameras and credit card transaction histories: "Even the recognition that such capabilities exist has a chilling effect on democratic participation."

“Public relations victory”

Before the lawsuit and the mounting public criticism, Anthropic had a deeper relationship with the Pentagon than many of its competitors, one that benefited both parties.

"The Department of Defense believes that Anthropic has the best product for military use, so it is pressuring the company in order to keep using it," said Mehta of CSIS.

For Anthropic, "the economics are very difficult for the AI industry, so we need a solid public sector business with multi-billion dollar contracts," he says.

OpenAI began working with the Department of Defense in Anthropic's place shortly after Anthropic's contract ended. But Anthropic appears to have won "a public relations victory, if not a real one," said New York University's Taylor.

Its positioning as an ethical AI company may have helped it gain public popularity: downloads of Claude skyrocketed in the weeks after the contract was terminated.

But the fact that companies have to draw the line themselves shows that governments are failing to do so, said Brianna Rosen, executive director of the University of Oxford's Cyber Technology Policy Programme.

“For the first time, the United States is using AI to generate targets in a large-scale combat operation in Iran,” she says. “And lawmakers are still debating whether to draw a red line on fully autonomous weapons. Lack of governance is itself a national security risk.”

The debate over regulating AI weapons highlights a widening gap between public concern and lawmakers' reluctance to over-regulate AI innovation in other areas. Polls show Americans are worried about potential job losses and the climate effects of AI. A Quinnipiac University poll conducted in April 2025 found that 69 percent of Americans think the government could do more to regulate AI.

This rift has helped make the AI industry a major funder of the 2026 midterm elections. Leading the Future, a super PAC that has received more than $100 million from OpenAI President Greg Brockman, Palantir co-founder Joe Lonsdale, and others, funded an ad against Alex Bores, a New York state legislator running for Congress. Bores sponsored the RAISE Act, which would require AI developers to disclose safety protocols and incidents.

Anthropic announced in February that it would donate $20 million to Public First Action, a PAC supporting candidates who back AI regulation, including Bores.

While AI companies aim to develop industry standards for testing and evaluating models, Anthropic is pushing for regulation because bad actors could violate such non-legally-binding standards, said Block of the Institute for Law and AI.

Experts say the court's decision in the Anthropic case and the upcoming midterm elections could together shape the direction of AI regulation.

“This could create room for more cautious policy development,” said Oxford University’s Rosen.


