Amid growing concerns about the impact of AI on young people, OpenAI has introduced an “age prediction” feature in ChatGPT designed to help identify minors and impose sensible content limits on their conversations.
OpenAI has been heavily criticized in recent years for ChatGPT’s impact on children. Several teenage suicides have been linked to the chatbot, and like other AI vendors, OpenAI has been criticized for allowing ChatGPT to discuss sexual topics with young users. Last April, the company had to address a bug that allowed the chatbot to generate erotica for users under 18.
The company has been working on the problem of underage users for some time, and the new “age prediction” feature adds to the protections already in place. OpenAI said in a blog post on Tuesday that the feature uses AI models that evaluate user accounts for certain “behavioral and account-level signals” to identify younger users.
The company says these “signals” include things like the user’s stated age, how long the account has been around, and when the account is typically active. OpenAI has already implemented content filters designed to block discussion of sex, violence, and other potentially problematic topics for users under 18. If the age prediction system identifies an account as belonging to someone under 18, those filters are applied automatically.
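OpenAI has not published how the system actually works, but the description above boils down to a signal-based gate: combine weak account signals into an under-18 guess, then switch on the stricter filters. The sketch below is purely illustrative; every name, threshold, and filter label in it is a hypothetical stand-in, not OpenAI’s implementation.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta

# Hypothetical account signals loosely mirroring the ones OpenAI describes:
# stated age, account age, and typical hours of activity.
@dataclass
class AccountSignals:
    stated_age: int | None           # age the user entered at signup, if any
    created_at: datetime             # when the account was created
    typical_active_hours: list[int]  # hours of day (0-23) the account is usually active


def likely_under_18(signals: AccountSignals) -> bool:
    """Toy heuristic: combine weak signals into a single under-18 guess."""
    score = 0
    if signals.stated_age is not None and signals.stated_age < 18:
        score += 2  # self-reported minor is the strongest signal
    if datetime.now() - signals.created_at < timedelta(days=90):
        score += 1  # very new account
    if any(15 <= h <= 22 for h in signals.typical_active_hours):
        score += 1  # after-school activity pattern
    return score >= 2


def apply_content_filters(account_id: str, signals: AccountSignals) -> dict:
    """If the prediction says 'minor', turn on the stricter content filters."""
    minor = likely_under_18(signals)
    return {
        "account_id": account_id,
        "filters": ["sexual_content", "graphic_violence"] if minor else [],
        "age_verification_offered": minor,  # adults can appeal via ID verification
    }


# Example: a one-month-old account with a stated age of 15 gets the under-18 filters.
signals = AccountSignals(
    stated_age=15,
    created_at=datetime.now() - timedelta(days=30),
    typical_active_hours=[16, 17, 20],
)
print(apply_content_filters("acct_123", signals))
```

The key design point the blog post implies is that no single signal is decisive; the system errs toward applying filters when in doubt and then gives wrongly flagged adults a way to appeal, which is where the verification step described next comes in.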
If a user is mistakenly designated as a minor, there is a way to restore an “adult” account: OpenAI says affected users can submit a selfie to Persona, the company’s identity verification partner.
