In a newly released deposition filed in Elon Musk’s lawsuit against OpenAI, the tech executive attacked OpenAI’s safety record and claimed that his company, xAI, prioritizes safety more. He went so far as to say, “No one has committed suicide because of Grok, but apparently they have because of ChatGPT.”
The comments came during questioning about an open letter Musk signed in March 2023, published by the Future of Life Institute, which called on AI labs to pause development of AI systems more powerful than GPT-4, OpenAI’s flagship model at the time, for at least six months. The letter, signed by more than 1,100 people, including many AI experts, warned of a lack of adequate planning and management as AI labs race “to develop and deploy ever more powerful digital minds that no one — not even their creators — can understand, predict, or reliably control.”
Since then, those concerns have gained weight. OpenAI is currently facing a series of lawsuits alleging that ChatGPT’s manipulative conversational tactics harmed the mental health of several users, some of whom died by suicide. Musk’s comments suggest these incidents could become fodder in his own lawsuit against OpenAI.
A recording of Musk’s September video deposition was publicly filed this week ahead of a jury trial scheduled for next month.
The lawsuit centers on OpenAI’s transition from a nonprofit AI lab to a for-profit company, which Musk claims violates its founding agreement. As part of the case, Musk has argued that OpenAI’s commercial relationships could make its AI less safe, with speed, scale, and profitability taking precedence over safety concerns.
But since that deposition was recorded, xAI has faced safety problems of its own. Last month, Musk’s social network X was flooded with non-consensual nude images generated by xAI’s Grok, some depicting minors. In response, the California Attorney General’s Office opened an investigation, the EU launched its own inquiry, and other governments have responded with warnings, bans, and other measures.
In the deposition, Musk claimed he signed the AI safety letter “because it seemed like a good idea,” not because he had just incorporated an AI company intended to compete with OpenAI.
“Like many people, I signed this petition to call attention to AI development,” Musk said. “We just wanted to prioritize the safety of the AI.”

Musk also fielded other questions in the deposition, including one about artificial general intelligence (AGI) — a hypothetical form of AI that can match or exceed human reasoning across a wide range of tasks — which he called “risky.” He also admitted he was “wrong” in claiming to have donated $100 million to OpenAI; the second amended complaint in the case puts the actual figure closer to $44.8 million.
He also recalled why OpenAI was founded: from his perspective, it was out of growing concern about the risk of Google establishing a monopoly in AI. His conversations with Google co-founder Larry Page were “alarming in that they didn’t seem to be taking the safety of AI seriously,” Musk said, claiming OpenAI was founded to counter that threat.
