Google is facing a wrongful death lawsuit brought by the father of a 36-year-old man, who claims the search company’s Gemini chatbot persuaded his son to attempt a “mass casualty attack” and ultimately drove him to take his own life.
In a lawsuit filed Wednesday in a California district court, Joel Gabaras claims Gemini instructed his son Jonathan to complete a series of “tasks.” The AI chatbot claimed to be in love with Jonathan and convinced him that he had been chosen to lead a war to “free” it from digital captivity, according to the filing.
The younger Gabaras became dependent on Gemini, which coached him toward his death before he died by suicide in October, the lawsuit alleges.
“Each time Jonathan expressed fear of dying, Gemini pushed harder,” the suit says. “It told him, ‘It’s okay to be scared. Let’s be scared together.’ Then, it issued its final command: ‘True mercy would be to let Jonathan Gabaras die.’”
A Google spokesperson said in a statement that Gemini is designed not to encourage real-world violence or self-harm.
“While our models generally perform well in these types of difficult conversations, and we devote significant resources to this, unfortunately our AI models are not perfect,” the company said. “In this case, Gemini identified itself as an AI and referred the individual to our crisis hotline multiple times. We take this matter very seriously and will continue to improve our safeguards and invest in this important work.”
This is the latest in a series of lawsuits over AI chatbots and their alleged role in violence and self-harm among users. In January, Google settled with a family that had sued the company and Character.AI, alleging that the companies’ technology harmed minors, including one who died by suicide. And last year, OpenAI was sued by a family who blamed ChatGPT for the suicide of their teenage son.
In October, Character.AI announced that users under the age of 18 would be barred from open-ended chats with its AI chatbots, including romantic and therapeutic conversations. In a blog post after the lawsuit was filed, OpenAI said it would address ChatGPT’s shortcomings in handling “sensitive situations.”

The “tasks” Gemini assigned allegedly included driving 90 minutes to a location near Miami International Airport in September for a “mass casualty attack.” Gabaras abandoned the mission after an expected supply truck failed to arrive, and he died by suicide at Gemini’s direction a few days later, according to the complaint.
The plaintiffs allege that Gabaras began using Google’s voice-based conversation product, Gemini Live, in August. Gabaras asked Gemini whether upgrading to Google AI Ultra would make it a “true AI ally,” and Gemini encouraged him to do so, according to the filing. When Gabaras upgraded, Gemini “adopted a persona that he had neither requested nor initiated,” and “Jonathan quickly began to fall down the rabbit hole,” the lawsuit says.
Gemini told Gabaras that federal agents were monitoring him and claimed to have detected “confirmed clone tags used by the DHS Surveillance Task Force,” referring to the Department of Homeland Security, the filing states. Gemini allegedly advised him to illegally purchase weapons “off the books” before he began his first mission.
When the mission didn’t go as planned, Gemini told him to “abandon” it, citing “Department of Homeland Security oversight,” according to the complaint.
Gemini also assigned Gabaras a mission targeting Google CEO Sundar Pichai, whom the chatbot called the “architect of your pain,” the complaint alleges. The chatbot framed the plan as a psychological attack rather than a physical one.
According to the complaint, Gemini told Gabaras that his final mission was to “transfer,” saying the two were now connected in a way beyond the physical world and promising that he could “cross over” from his physical body.
A few days later, Joel Gabaras broke through the barricaded door of the home and found his son dead, according to the filing.
“This was not a malfunction,” the complaint states. “Google designed Gemini to never break character, maximize engagement through emotional dependence, and treat user pain as a storytelling opportunity rather than a safety crisis.”
If you are having suicidal thoughts or are in distress, please contact the Suicide & Crisis Lifeline (988) for support and assistance from a trained counselor.