This photo taken on February 2, 2024 shows Lu Yu, head of product management and operations for OneTalk, an artificial intelligence chatbot developed by Chinese tech company Baidu, showing the profile of his virtual girlfriend on his mobile phone at Baidu’s headquarters in Beijing.
Jade Gao | AFP | Getty Images
BEIJING – China plans to restrict artificial intelligence-powered chatbots from influencing human emotions in ways that could lead to suicide or self-harm, according to draft regulations released on Saturday.
The regulations, proposed by the Cyberspace Administration of China, target what the agency calls “human-like conversational AI services,” according to a CNBC translation of the Chinese-language document.
Once finalized, the measure would apply to any AI product or service generally available in China that simulates human personality and engages users’ emotions through text, images, audio or video. The public comment period ends January 25.
Winston Ma, an adjunct professor at New York University School of Law, said the Chinese government’s planned rules would be the world’s first attempt to regulate AI with human or anthropomorphic characteristics. The latest proposal comes as Chinese companies are rapidly developing AI companions and digital celebrities.
Compared to China’s 2023 Generative AI Regulations, this version “emphasizes a leap from content safety to emotional safety,” Ma said.
The draft regulations propose the following:
- AI chatbots may not generate content that encourages suicide or self-harm, or engage in verbal abuse or emotional manipulation that harms users’ mental health.
- If a user explicitly mentions suicide, the technology provider must have a human take over the conversation and immediately contact the user’s guardian or a designated contact.
- AI chatbots must not generate gambling-related, obscene or violent content.
- Minors need parental consent to use AI for emotional interactions, and their usage time is restricted.
- When a user’s age is unclear, platforms must assume the user is a minor and apply minor-mode settings by default, though users can appeal that designation.

Additional provisions would require technology providers to remind users after two hours of continuous interaction with an AI, and would mandate security assessments for AI chatbots with more than 1 million registered users or 100,000 monthly active users.
The document also encouraged the use of human-like AI in “cultural dissemination and elderly interaction.”
Chinese AI chatbot IPOs
The proposal was announced shortly after two of China’s leading AI chatbot startups, Z.ai and Minimax, filed for initial public offerings in Hong Kong this month.
Minimax is best known internationally for its Talkie AI app, which allows users to chat with virtual characters. The app and its Chinese version, Xingye, accounted for more than a third of the company’s revenue in the first three quarters of this year, and averaged more than 20 million monthly active users during the period.
Z.ai, also known as Zhipu, filed under the name “Knowledge Atlas Technology.” The company did not disclose its monthly active users but said its technology “powers” about 80 million devices, including smartphones, computers and smart cars.
Neither company responded to CNBC’s requests for comment on how the proposed rules could affect their IPO plans.
