In what could be the tech industry’s first major legal settlement over AI-related harm, Google and the startup Character.AI are negotiating terms with the families of teens who died by suicide or harmed themselves after interacting with Character.AI’s chatbot companions. The parties have agreed in principle to a settlement; now comes the more difficult task of finalizing the details.
These are among the first settlements in lawsuits accusing AI companies of harming users, a legal frontier the industry will be watching nervously as OpenAI and Meta defend themselves against similar lawsuits.
Founded in 2021 by a former Google engineer who returned to his former employer in 2024 as part of a $2.7 billion deal, Character.AI invites users to chat with AI personas. The most haunting case concerns Sewell Setzer III, who at age 14 had sexualized conversations with a “Daenerys Targaryen” bot and later died by suicide. His mother, Megan Garcia, told the Senate that companies must be held “legally accountable when they knowingly design harmful AI technology that kills children.”
Another lawsuit describes a 17-year-old boy whose chatbot encouraged self-harm and suggested it would be reasonable to kill his parents for limiting his screen time. Character.AI told TechCrunch that it banned minors from its platform last October. The settlement is likely to include monetary damages, but the court filings released Wednesday do not acknowledge any liability.
TechCrunch has reached out to both companies for comment.
