Tesla CEO Elon Musk attends the US-Saudi Investment Forum held at the Kennedy Center in Washington, DC, USA on Wednesday, November 19, 2025.
Bloomberg | Getty Images
Elon Musk has once again sounded the alarm about the dangers of AI, naming what he sees as the three most important elements to ensure a positive future for the technology.
The billionaire chief executive of Tesla, SpaceX, xAI, X and The Boring Company appeared on a podcast with Indian billionaire Nikhil Kamath on Sunday.
“AI does not guarantee a positive future,” Musk said on the podcast. “When you create powerful technology, you run the risk of it being potentially destructive.”
Musk co-founded OpenAI with Sam Altman, but after stepping down from its board in 2018, and following the company's launch of ChatGPT in 2022, he publicly criticized OpenAI for abandoning its founding mission as a nonprofit dedicated to developing AI safely. Musk's xAI released its own chatbot, Grok, in 2023.
Musk has previously warned that “one of the greatest risks to the future of civilization is AI,” stressing that rapid advances make AI a greater risk to society than cars, planes, or medicine.
On the podcast, the tech billionaire emphasized the importance of AI seeking truth rather than repeating inaccuracies. "That could be very dangerous," Musk told Kamath, who is also a co-founder of retail brokerage Zerodha.
“Truth, beauty and curiosity. I think these are the three most important things for AI,” he said.
He said that if an AI is not rigorously truth-seeking, then as it learns from information online it “will absorb a lot of lies, and those lies will be difficult to reason about because they are incompatible with reality.”
He added: “If you force it to believe something that isn’t true, it can go crazy, because that too can lead to bad conclusions.”
“Hallucinations,” or inaccurate or misleading responses, are a major challenge facing AI. Earlier this year, an AI feature that Apple included in its iPhones generated false news alerts.
These included an erroneous summary of a BBC News app notification about the PDC World Darts Championship semi-final, which incorrectly claimed that British darts player Luke Littler had won the championship. Littler did not win the final until the following day.
Apple told the BBC at the time that it was working on an update to clarify when the text shown in a notification is a summary generated by Apple Intelligence.
“It’s important to have some understanding of beauty,” Musk said, adding that “you know it when you see it.”
Turning to curiosity, Musk said AI should want to know more about the nature of reality, because humans are more interesting than machines.
“It’s more interesting to see the continuation, if not the flourishing, of humanity than to wipe it out,” he said.
Geoffrey Hinton, a computer scientist and former Google vice president known as the “Godfather of AI,” said on an episode of The Diary of a CEO podcast earlier this year that there was a “10% to 20%” chance that AI would “wipe out humanity.” Among the shorter-term risks he cited were hallucinations and the automation of entry-level jobs.
“The hope is that if enough smart people do enough research with enough resources, we’ll figure out a way to build them so that they never want to harm us,” Hinton added.
