WhistleBuzz – Smart News on AI, Business, Politics & Global Trends
AI

Tech workers urge Pentagon and Congress to rescind Anthropic’s ‘supply chain risk’ designation

By Editor-In-Chief | March 2, 2026 | 3 min read


Hundreds of tech workers have signed an open letter calling on the Department of Defense to rescind Anthropic’s designation as a “supply chain risk.” The letter also asks Congress to intervene and “consider whether the exercise of these special powers against U.S. technology companies is appropriate.”

The letter includes signatories from major technology and venture capital firms, including OpenAI, Slack, IBM, Cursor, and Salesforce Ventures. It follows a dispute between the Department of Defense and Anthropic last week, after the AI lab refused to give the military unrestricted access to its AI systems.

Anthropic’s two red lines in negotiations with the Pentagon were that its technology not be used for mass surveillance of Americans, and that it not power autonomous weapons that could select targets and make firing decisions without human intervention. The Department of Defense said it has no plans to do either, but maintained that it should not be bound by vendor-imposed restrictions.

President Donald Trump on Friday directed federal agencies to stop using Anthropic’s technology after a six-month transition period, after Anthropic CEO Dario Amodei refused to give in to threats from Defense Secretary Pete Hegseth. Hegseth said he intends to follow through and designate Anthropic as a supply chain risk, a designation typically reserved for foreign adversaries that bars them from partnering with government agencies or with companies that do business with the Department of Defense.

“Effective immediately, any contractor, supplier, or partner doing business with the U.S. military may not engage in any commercial activity with Anthropic,” Hegseth wrote in a post Friday.

But a post on X doesn’t automatically make Anthropic a supply chain risk. The government is required to complete a risk assessment and notify Congress before military partners must sever ties with Anthropic or its products. Anthropic said in a blog post that the designation is “legally unsound” and that it “will challenge the supply chain risk designation in court.”

Many in the industry view the administration’s treatment of Anthropic as heavy-handed and as clear retaliation.


“If the parties cannot agree on terms, the normal course of action is to part ways and work with a competitor,” the open letter said. “This situation sets a dangerous precedent. Punishing U.S. companies that refuse to accept contract changes sends a clear message to all U.S. technology companies: Accept whatever terms the government demands or face retaliation.”

Beyond concerns about the government’s harsh treatment of Anthropic, many in the industry remain concerned about government overreach and the potential for AI to be used for illicit purposes.

OpenAI researcher Boaz Barak wrote in a social media post on Monday that preventing governments from using AI to carry out mass surveillance is also his “personal red line” and that “it should belong to all of us.”

Shortly after President Trump publicly attacked Anthropic, OpenAI announced it had reached a unique agreement to deploy its models into the Pentagon’s classified environments. OpenAI CEO Sam Altman said last week that the company has the same red lines as Anthropic.

“If anything good can come out of last week’s events, it will be if we in the AI industry begin to treat the problem of governments using AI for abuse and surveillance of their own citizens as a catastrophic risk in itself,” Barak wrote. “We’ve had successful assessments, mitigations, and processes for risks like biological weapons and cybersecurity. Let’s use the same process here.”


