Character.AI announced Wednesday that it will soon block minors from engaging in open-ended chats with its artificial intelligence chatbots, including conversations about romance and therapy.
The Silicon Valley startup, which lets users create and interact with character-based chatbots, said the move is part of its effort to make its app safer for users under 18.
Last year, 14-year-old Sewell Setzer III died by suicide after carrying on a sexualized relationship with a chatbot on the Character.AI app. Many AI developers, including OpenAI and Facebook parent Meta, have come under intense scrutiny after users died by suicide or were otherwise harmed after forming relationships with chatbots.
Character.AI said Wednesday that, as part of its safety efforts, it will limit open-ended chat for users under 18 to two hours per day, and will remove the capability for minors entirely by November 25.
“This is a bold step forward, and we hope it raises the bar for everyone else,” Character.AI CEO Karandeep Anand told CNBC.
Character.AI introduced changes in October 2024 to prevent minors from having sexual conversations with its chatbots, the same day Sewell's family filed a wrongful death lawsuit against the company. Character.AI also announced safety features in December that placed conservative limits on romantic content for teens, but Wednesday's changes eliminate open-ended chat for minors altogether.
The company said it is rolling out an age assurance feature that uses first-party and third-party tools to verify users' ages and enforce the new policy. It has partnered with Persona, the same verification provider used by Discord and others.
In 2024, Character.AI's founders and some of its research team joined Google DeepMind, Google's AI division, in one of several deals struck by major technology companies to accelerate the development of AI products and services. As part of the agreement, Character.AI granted Google a non-exclusive license to its large language model (LLM) technology.
Since Anand took over as CEO in June, 10 months after the Google deal, Character.AI has added features to diversify its offering beyond chatbot conversations, including feeds of AI-generated videos as well as storytelling and role-play formats.
While teenagers will no longer be able to chat freely on the app, they will still have access to its other features, said Anand, a former Meta executive.
About 10% of the company's roughly 20 million monthly active users are under 18, a share Anand said has declined as the app has shifted its focus toward storytelling and role-play.
The app makes money primarily through advertising and a $10-a-month subscription, and Character.AI is on track to reach a $50 million revenue run rate by the end of this year, Anand said.
Additionally, the company announced Wednesday that it will establish and fund an independent AI Safety Lab focused on safety research for AI entertainment. Character.AI did not disclose the funding amount but said it is inviting other companies, academics, researchers and policymakers to join the nonprofit effort.
Regulatory pressure
Character.AI is one of many AI chatbot companies facing regulatory scrutiny over the issue of teenagers and AI companions.
In September, the Federal Trade Commission ordered Character.AI's parent company, along with Alphabet, Meta, OpenAI, Snap and others, to provide information on how their chatbots affect children and teenagers.
On Tuesday, Sen. Josh Hawley (R-Mo.) and Sen. Richard Blumenthal (D-Conn.) announced a bill that would ban AI chatbot companions for minors. Earlier this month, California Gov. Gavin Newsom signed a law requiring chatbots to identify themselves as AI and to remind minors every three hours to take a break.
In October, rival Meta, which also offers AI chatbots, announced safety features that let parents see and control how their teens interact with AI characters on its platform. Parents can turn off one-on-one chats with AI characters entirely or block specific characters.
The issue of sexual conversations with AI chatbots has come into sharper focus as technology companies take differing approaches to the problem.
Earlier this month, OpenAI CEO Sam Altman said the company would allow verified adult users to have erotic conversations with ChatGPT later this year, saying his company is "not the elected morality police of the world."
Microsoft AI CEO Mustafa Suleyman said last week that the company would not offer "simulated erotica," calling sexbots "very dangerous." Microsoft is a major investor in and partner of OpenAI.
Since ChatGPT launched in late 2022, the race to build more lifelike, human-like AI companions has intensified in Silicon Valley. Some users have developed deep attachments to AI characters, and the technology's rapid development has raised ethical and safety concerns, especially for children and teens.
“I also have a six-year-old child and I want her to grow up in a safe environment with AI,” Anand said.
If you are having suicidal thoughts or are in distress, please contact the Suicide & Crisis Lifeline (988) for support and assistance from a trained counselor.
