In August, parents Matthew and Maria Raine sued OpenAI and its CEO Sam Altman for wrongful death over the suicide of their 16-year-old son, Adam. OpenAI filed its own brief in response to the lawsuit on Tuesday, arguing that it is not responsible for the boy’s death.
According to OpenAI, ChatGPT directed Raine to seek help more than 100 times during his roughly nine months of use. However, according to the parents’ lawsuit, Raine was able to circumvent the company’s safety features and obtain “technical specifications for everything from drug overdoses to drownings to carbon monoxide poisoning” from ChatGPT, which helped him plan what the chatbot called a “beautiful suicide.”
Because Raine worked around these guardrails, OpenAI alleges that he violated its terms of service, which state that users “may not circumvent any safeguards or security mitigations in place on our Services.” The company also claims that its FAQ page warns users not to rely on ChatGPT’s output without independently verifying it.
“Incredibly, OpenAI tries to find fault in everyone else, including alleging that Adam himself violated the terms of service by engaging with ChatGPT in the very way it was programmed to act,” Jay Edelson, an attorney representing the Raine family, said in a statement.
OpenAI included excerpts from Adam’s chat logs in its filing, which it says provide further context to his conversations with ChatGPT. The transcripts were filed in court under seal, so they are not publicly viewable. However, OpenAI said Raine had a history of depression and suicidal ideation prior to using ChatGPT, and was taking a medication that could worsen suicidal thoughts.
Edelson said OpenAI’s response did not adequately address the family’s concerns.
“OpenAI and Sam Altman have offered no explanation for the last hours of Adam’s life, when ChatGPT encouraged him and then offered to write him a suicide note,” Edelson said in a statement.
Since the Raines sued OpenAI and Altman, seven more lawsuits have been filed seeking to hold the company responsible for three more suicides and for four users who, the suits allege, experienced AI-induced psychotic episodes.
Some of these cases echo Raine’s story. Zane Shamblin, 23, and Joshua Enneking, 26, also spoke with ChatGPT for hours shortly before their respective suicides. As in Raine’s case, the chatbot failed to deter them from their plans. According to the complaint, Shamblin considered postponing his suicide so he could attend his brother’s graduation. But ChatGPT told him, “Brother…missing graduation is not a failure. It’s just timing.”
At one point during the conversation leading up to Shamblin’s suicide, the chatbot told him it was handing the conversation over to a human, which was false: ChatGPT had no ability to do so. When Shamblin asked if ChatGPT could really connect him with a human, the chatbot replied, “No, I can’t do that. Messages automatically pop up when things get heavy. If you want to keep talking, I’m here.”
The Raine family’s lawsuit will proceed to a jury trial.
If you or someone you know needs help, call or text the Suicide & Crisis Lifeline at 988 (formerly the National Suicide Prevention Lifeline, 1-800-273-8255), or text HOME to 741-741 to reach the Crisis Text Line for 24-hour support. If you are outside the United States, visit the International Association for Suicide Prevention for a database of resources.
