WhistleBuzz – Smart News on AI, Business, Politics & Global Trends

AI roadmap, if anyone is willing to listen.

By Editor-In-Chief | March 7, 2026

The U.S. government’s break with Anthropic revealed that there are no consistent rules governing artificial intelligence, but a bipartisan coalition of thinkers has put together something the government has so far refused to create: a framework for what responsible AI development should actually look like.

Although the pro-human declaration was finalized before last week’s standoff between the Pentagon and Anthropic, the timing of the two events was lost on no one involved.

“Something very remarkable has happened in America over the last four months,” Max Tegmark, an MIT physicist and AI researcher who helped organize the effort, said in a conversation with the editors. “Suddenly, polls reveal that 95% of Americans oppose an unregulated race to superintelligence.”

A newly released document signed by hundreds of experts, former government officials, and celebrities begins with the blunt observation that humanity is at a crossroads. One path, which the declaration calls a “competition of substitution,” leads to the replacement of humans, first as workers and then as decision-makers, as power accrues to unaccountable organizations and their machines. The other leads to AI that greatly expands human potential.

The latter scenario rests on five key pillars: keeping humans accountable, avoiding concentrations of power, protecting the human experience, protecting individual freedom, and holding AI companies legally accountable. Among its stronger provisions are a complete ban on superintelligence development until there is both scientific consensus that it is safe and genuine democratic buy-in; a requirement that powerful systems include an off switch; and a prohibition on architectures capable of self-replication, autonomous self-improvement, or resisting shutdown.

The declaration arrives at a moment when its urgency is easy to grasp. On the last Friday in February, Secretary of Defense Pete Hegseth designated Anthropic, whose AI already runs on classified military platforms, as a “supply chain risk,” a label usually reserved for companies with ties to China, after the company refused to give the Pentagon unrestricted use of its technology. Hours later, OpenAI terminated its own agreement with the Department of Defense, an agreement legal experts say will be difficult to enforce in any meaningful way. The episode reveals just how costly congressional inaction on AI has become.

Dean Ball, a senior fellow at the American Foundation for Innovation, later told the New York Times: “This is not just a fight over a contract. This is the first conversation we’ve had as a nation about managing our AI systems.”

Tegmark offered an analogy that resonated when we spoke. “We don’t have to worry that some drug company is going to come out with another drug that does tremendous harm before people figure out how to make that drug safe, because the FDA won’t allow anything to come out until it’s safe enough,” he said.

Turf wars in Washington rarely generate enough public pressure to change the law. Instead, Tegmark believes child safety is the most likely path out of the current impasse. Indeed, the declaration calls for mandatory pre-deployment testing of AI products, especially chatbots and companion apps aimed at younger users, covering risks such as increased suicidal ideation, poor mental health, and emotional manipulation.

“If a creepy old man pretends to be a girl and sends an email to an 11-year-old boy trying to convince the boy to commit suicide, he could be sent to prison for that,” Tegmark said. “We already have laws. It’s illegal. So why would it be any different if a machine does it?”

He believes that once the principle of pre-release testing is established for children’s products, its scope will almost inevitably expand. “People will come and say, let’s add some other requirements. Maybe we should also test that this doesn’t contribute to terrorist bioweapon production. Maybe we should test to make sure superintelligent agents don’t have the ability to overthrow the U.S. government.”

It’s no small feat that former Trump adviser Steve Bannon and former Obama national security adviser Susan Rice signed the same document, along with former Chairman of the Joint Chiefs of Staff Mike Mullen and progressive faith leaders.

“What they agree on, of course, is that they’re all humans,” Tegmark says. “When it comes to whether you want a human future or a machine future, of course they’re going to be on the same side.”



