When will we take the fate of AI seriously?
Here’s a key breakdown of Elon Musk’s attempt to shut down OpenAI’s commercial AI business. His lawyers argue that the organization was founded as a charity focused on AI safety and has lost its way in pursuit of profit. To prove that, they cite old emails and statements from the organization’s founders about the need for a public-spirited counterweight to Google DeepMind.
Today they called Stuart Russell, their only expert witness able to speak directly to AI technology. Russell is a computer science professor at the University of California, Berkeley who has researched AI for decades. His job was to provide background on AI and make the case that the technology is worryingly dangerous.
In March 2023, Russell signed an open letter calling for a six-month pause on training the most powerful AI systems. In a sign of the contradictions running through this case, Musk signed the same letter even as he was launching his own for-profit AI lab, xAI.
Russell told jurors and Judge Yvonne Gonzalez Rogers that there are a variety of risks in developing AI, from cybersecurity threats to problems with poor coordination and the winner-take-all nature of artificial general intelligence (AGI) development. Ultimately, he said, there is a tension between the pursuit of AGI and safety.
Russell’s broader concerns about the existential threat of unconstrained AI did not come up in open court, as the judge limited his testimony over objections from OpenAI’s lawyers. But Russell has long criticized the arms-race dynamics created by rival labs around the world vying to be first to achieve AGI, and has called on governments to regulate the field more tightly.
OpenAI’s lawyers spent their cross-examination establishing that Russell had not directly assessed the organization’s corporate structure or specific safety policies.
But this reporter (like the judge and jury) is left weighing how much of the story is corporate greed and how much is genuine concern about the safety of AI. Virtually all of OpenAI’s founders passionately warned about the risks of AI while also emphasizing its benefits, racing to build the technology as quickly as possible, and fleshing out plans for an AI-focused commercial company that they would control.
From the outside, the obvious problem is that since OpenAI’s founding there has been a growing recognition that labs simply need vastly more spending on computing to succeed, and that funding can only come from commercial investors. Fearing that AGI would fall into the hands of a single organization, the founding team sought that capital, ultimately tore itself apart, and created the arms race we know today, which brings us to this lawsuit.
The same dynamics are already playing out at the national level, with Sen. Bernie Sanders pushing legislation to impose a moratorium on data center construction, echoing concerns about AI voiced by Elon Musk, Sam Altman, Geoffrey Hinton and others. Hodan Omaar of the Center for Data Innovation, an industry group, took issue with Sanders appealing to fear rather than hope, telling TechCrunch: “It’s unclear why the public should discount everything a tech billionaire says, unless you can marshal their words to fill holes in a dangerous debate.”
Now both parties in the case are asking the court to take some of Altman’s and Musk’s statements seriously while ignoring the parts that don’t serve their legal arguments.
Correction: Article has been updated to correct the name of Stuart Russell, professor of computer science at the University of California, Berkeley.
