Photo: Jaromir Chalabala/Getty
British regulators are calling on social media giants to implement stricter protections for children on their platforms after lawmakers rejected a blanket ban on under-16s.
Communications regulator Ofcom and the Information Commissioner’s Office announced Thursday that they have written to YouTube, TikTok, Facebook, Instagram and Snapchat, asking them to address a range of child safety issues, from introducing strict age verification measures to tackling child grooming on their platforms.
The move comes after British MPs voted earlier this month against a proposal to add a social media ban for under-16s to the Child Welfare Bill.
The UK government has launched a consultation on children’s social media use to gather views from parents and young people on whether social media bans are effective.
European governments are considering tightening regulations to restrict young people’s use of social media after Australia became the first country to impose a blanket ban on under-16s in December. Spain, France and Denmark are also considering similar measures.
Better age verification technology
Ofcom has written to the platforms asking them to report on what steps they are taking to keep underage children off their services, giving them a deadline of April 30 to respond.
The demands include tightening minimum age requirements, preventing strangers from contacting children, making content safer for teens, and ending the testing of products such as AI features on children.
Ofcom chief executive Melanie Dawes said the tech giants have “failed to put child safety at the heart of their products” and are failing to deliver on their promises to keep children safe online.
“Without appropriate protections, including effective age verification, children are routinely exposed to risks by services they cannot realistically avoid,” Dawes said.
The ICO published an open letter on Thursday, saying social media platforms should use facial age estimation, digital ID or one-off photo matching to better authenticate age.
The regulator said many platforms rely on “self-declaration” as the main way to verify a user’s age, which is “easily circumvented” and ineffective.
“This puts people under the age of 13 at risk as their information is collected and used unlawfully without the protection they deserve,” ICO CEO Paul Arnold said in the letter.
“As public concern continues to rise, the status quo is not working, and the industry must do more to protect children. We must act now to identify and implement currently viable technologies to ensure that children under the minimum age cannot access these services,” Arnold added.
Meta initially blocked more than 500,000 accounts believed to belong to users under 16 from Instagram, Facebook, and Threads in compliance with Australia’s social media ban. But the company urged the Australian government to reconsider, arguing that a blanket ban would simply push teenagers to circumvent the law and access social media without the necessary safeguards.
Instagram has announced that it will warn parents if their teens repeatedly search for terms such as suicide and self-harm within a short period of time.
A groundbreaking lawsuit against Meta and Alphabet went to trial in January, brought by young women and their mothers who say Instagram and YouTube have design features that foster addiction.
Meta CEO Mark Zuckerberg and Instagram head Adam Mosseri have already testified, and a verdict is expected in mid-March. The case could set a precedent for the responsibilities social media companies have toward their youngest users.
In January, the European Commission launched an investigation into Grok, the AI chatbot from Elon Musk’s company X, over its spread of sexually explicit material involving children. The ICO also took enforcement action against Reddit in February for unlawfully processing children’s personal data.
What the tech companies say
A Meta spokesperson told CNBC in a statement that the company has already implemented certain measures outlined by regulators, including using “AI and facial age estimation technology to detect a user’s age based on activity.”
The company also offers teen accounts with built-in protections, the spokesperson said. “With teenagers using an average of 40 apps per week, we believe the most effective way to complement our own age-assurance approach is to verify age centrally at the app store level,” they added.
TikTok said it rolled out enhanced technology across Europe starting in January to detect and remove accounts under the minimum age requirement of 13, with the help of professional moderators.
The company said it also uses facial age estimation, credit card verification, or government-approved identification to verify a user’s age.
Snapchat and YouTube did not immediately respond to requests for comment from CNBC.
