AI

Elon Musk’s lawsuit exposes OpenAI’s safety record

By Editor-In-Chief | May 7, 2026 | 4 Mins Read


Elon Musk’s legal effort to dismantle OpenAI may hinge on whether its commercial subsidiary strengthens or undermines the lab’s founding mission: ensuring that humanity benefits from artificial general intelligence.

On Thursday, a federal court in Oakland heard testimony from former employees and directors who said the company’s efforts to push AI products to market undermined its commitment to AI safety.

Rosie Campbell joined the company in 2021 and left OpenAI in 2024 after its AGI Readiness team was disbanded. Another safety-focused group, the Superalignment team, was shut down around the same time.

“When I started, there was a lot of emphasis on research, and it was common to talk about AGI and safety issues,” she testified. “Over time, we became more of a product-centric organization.”

Under cross-examination, Campbell acknowledged that the company’s goal of building AGI would likely require significant funding, but said that creating superintelligent models without proper safeguards did not fit the mission of the organization she had first joined.

Campbell pointed to an incident in which Microsoft introduced a version of the GPT-4 model in India through its Bing search engine before it had been evaluated by OpenAI’s Deployment Safety Board (DSB). She said that while the model itself did not pose a significant risk, the company “needed to set a strong precedent as technology becomes more powerful. We want to put in place good safety processes that we know will be followed reliably.”

On cross-examination, OpenAI’s lawyers also pressed Campbell to acknowledge it was her “speculative opinion” that OpenAI’s safety approach is better than that of xAI, the AI company founded by Musk and acquired by SpaceX earlier this year.


Although OpenAI has published evaluations of its models and shared its safety framework publicly, the company declined to comment on its current approach to AGI alignment. Its current head of preparedness, Dylan Scandinaro, was hired from Anthropic in February. Altman said the hire would “help me sleep better at night.”

The rollout of GPT-4 in India was one of the red flags that led OpenAI’s nonprofit board to fire CEO Sam Altman in 2023. The incident came after employees, including then-chief scientist Ilya Sutskever and then-chief technology officer Mira Murati, complained about Altman’s conflict-avoidant management style. Tasha McCauley, a member of the board at the time, testified about concerns that Altman was not engaged enough to make the board’s unusual structure work.

McCauley also discussed Altman’s widely reported pattern of misleading the board. In particular, Altman lied to another board member about McCauley’s supposed intention to fire a third board member, Helen Toner, who had published a white paper containing implicit criticism of OpenAI’s safety policies. Altman also did not notify the board of his decision to launch ChatGPT publicly, and members were concerned about his failure to disclose potential conflicts of interest.

“We are a nonprofit board, and our mission was to be able to oversee the for-profit entities below us,” McCauley told the court. “The way we were supposed to do things was being questioned. We had no confidence that the information that was being passed on would allow us to make informed decisions.”

However, the decision to fire Altman triggered a revolt among the company’s employees. McCauley said that as OpenAI staff sided with Altman and Microsoft pushed to restore the status quo, the board eventually reversed course, and the members who had opposed Altman resigned.

That inability of the nonprofit board to rein in the for-profit organization is directly relevant to Musk’s lawsuit, which alleges that OpenAI’s transformation from a research organization into one of the world’s largest private companies violated the tacit agreement among its founders.

David Schizer, a former dean of Columbia Law School retained by Musk’s team as a paid expert witness, echoed McCauley’s concerns.

“OpenAI emphasizes that safety is a key part of its mission, and that it intends to prioritize safety over profit,” Schizer said. “Part of that is taking safety reviews seriously. If something needs to be subject to a safety review, it needs to be done. It’s a matter of process.”

AI is already deeply embedded in commercial enterprises, and the problem extends far beyond a single lab. McCauley said OpenAI’s internal governance failures should be a reason to accept stronger government regulation of advanced AI. “If it all comes down to decisions made by one CEO and the public interest is at stake, that’s very suboptimal.”

