The UK’s Online Safety Act was meant to keep children safe. Instead, it is leaving the public uninformed. Within days of the law coming into force in late July 2025, X (formerly Twitter) began hiding videos of Israeli atrocities in Gaza from UK users’ timelines behind content warnings and age restrictions. The law, sold as a safeguard, has become one of the most effective censorship tools Britain has ever constructed. What is happening now is no coincidence. It is the result of laws that normalize censorship, identity checks, and online surveillance, all armored in the rhetoric of child protection.
The roots of Britain’s online censorship crisis go back nearly a decade to MindGeek, the scandal-plagued company behind Pornhub (now rebranded as Aylo). This tax-dodging, exploitative porn empire worked closely with the British government to develop an age verification system called AgeID. The plan would have effectively handed MindGeek a monopoly on legal adult content, forcing smaller competitors to either pay up or disappear. A public backlash killed AgeID in 2019, but the idea lived on. A precedent had been set: a democracy had embraced the notion that access to online content should be gated by identity verification. The groundwork was laid by the Digital Economy Act 2017 and finally enshrined in the Online Safety Act 2023. Several European Union countries, including France and Germany, are now considering similar legislation built on the same “protecting children” rhetoric. This is not a conspiracy. It is a natural convergence of corporate capture and state control, wrapped in the moral language of child safety.
The Online Safety Act gives Ofcom the power to police almost every corner of the internet, from social media and search engines to adult content platforms, backed by threats of fines of up to £18 million ($24 million) or 10 percent of global revenue. Platforms can be designated as “Category 1” services and subjected to the strictest rules, including mandatory age verification, identity verification for posters, and the removal of vaguely defined “harmful” material. Wikipedia currently faces this very threat. In August 2025, the High Court rejected the Wikimedia Foundation’s challenge to the classification rules, paving the way for Ofcom to treat Wikipedia as a high-risk platform. The foundation warned that compliance would endanger volunteer editors by forcing it to censor sensitive information and tie contributors’ real identities to their edits. If the foundation refuses to comply, the UK could in theory block access to Wikipedia entirely, an alarming example of how “child protection” can become a means of controlling information. Ofcom has already launched several investigations into major porn sites and social networks for alleged compliance violations. The chilling effect of this law is no longer hypothetical. It is operational.
Age verification systems are fundamentally incompatible with privacy and security. In fact, any identity verification system should immediately raise suspicion. The July 2025 breach of the Tea dating app exposed thousands of photos and more than 13,000 confidential ID documents, which were then spread on 4chan. More recently, the Discord data breach, in which a hacked third-party service exposed more than 70,000 government ID documents, proved the point again.
When systems store verification data linking real identities to online activity, they create a gold mine for hackers, extortionists, and nation-states. History has already sounded the alarm, from the 2013 Brazzers breach of nearly 800,000 accounts to the FBI’s finding that porn-related exposure scams remain one of the leading categories of online extortion. Now imagine this infrastructure applied not only to adult content, but also to political speech, journalism, and activism. The same tools built for “child safety” enable unprecedented intimidation and political manipulation. A single breach could expose journalists, whistleblowers, and public officials. And in a world where data routinely crosses borders, there is no guarantee that democracies’ verification databases will not fall into authoritarian hands. The more we digitize “trust”, the more vulnerable it becomes.
The most insidious feature of this legislative trend is how it relieves parents of responsibility while empowering the state. Sophisticated parental control tools already exist: parents can monitor and restrict their children’s internet use through devices, routers, and apps. The push for government-mandated age verification is not about those tools failing. It is about some parents choosing not to use them, and the government seizing on that inaction as a pretext for surveillance. Rather than investing in education and digital literacy, authorities are expanding their own power to decide what everyone can see. A state that trusted its citizens would educate them. But under online safety laws, every citizen is a suspect who must prove their innocence before saying or reading anything online. What is advocated under the framework of “protecting children” is, in reality, a compliance system for the entire nation.
Britain’s disastrous experiment is already spreading. France and Germany are drafting parallel age verification and online safety laws, while the European Union’s age verification blueprint would tie access to adult content and “high-risk” platforms to interoperable digital IDs. The EU claims its system protects privacy, but the architecture is identical to the UK model: comprehensive identity verification disguised as protection. The logic repeats itself everywhere. A law begins with the narrow goal of shielding minors from pornography, but its powers quickly expand, first to protest footage, then to politics. Today, it is Gaza videos and sexual content. Tomorrow, it will be journalism and political opposition. The UK is not an outlier but a template for digital authoritarianism exported under the banner of security.
Supporters of these laws argue that we face a binary choice: adopt universal age verification or abandon our children to the dangers of the internet. But this framing is disingenuous. No technological system can replace engaged parenting and digital literacy education. Determined teenagers will still find ways to access adult content; they will simply be pushed into darker corners of the web. Meanwhile, the law does little to stop the real threat of child sexual abuse material circulating on encrypted or hidden networks that will never comply with regulation. In fact, the only sites following the rules are those already capable of policing themselves, the very sites now being undermined by the state. By driving young people towards VPNs and unregulated platforms, lawmakers risk exposing them to far greater harm. The result is children who are less safe and more exposed.
Strip away the child protection rhetoric and the true function of online safety laws becomes clear: building the infrastructure for mass content control and population monitoring. Once these systems exist, they are easy to scale. We have seen this logic before. Anti-terrorism laws were transformed into tools for cracking down on dissent; “child safety” now provides cover for the same authoritarian creep. The EU is already weighing proposals such as mandatory chat scanning and weakened encryption, with assurances that such measures will be used only against abusers, assurances that will hold only until the tools inevitably fall into the wrong hands. The immediate effects in the UK, restricting footage from Gaza, threatening access to Wikipedia, censoring protest videos, are not glitches. They are previews of a digital order built on control. What is at stake is not just privacy but democracy itself: the right to speak, to know, and to dissent without first being verified.
Protecting children online does not require building a surveillance state. It requires education, accountability, and support for parents, teachers, and platforms alike. Governments should invest in digital literacy, pursue real online exploitation, and give parents better tools to control access. Platforms should be held to clear standards of transparency and algorithmic accountability rather than being turned into identity police for the adults who use them. Where self-regulation fails, targeted enforcement can work; universal verification cannot.
The UK’s Online Safety Act and similar laws around the world represent a fundamental choice about the kind of digital future we want. We can accept the false promise of security through surveillance and control, or we can insist on solutions that protect children without sacrificing the privacy, freedom, and democracy that make protection worth having in the first place. The early results in the UK should serve as a warning, not a model. Before this creeping authoritarianism becomes irreversible, the public and lawmakers must recognize that when governments claim to protect children by controlling information, they are usually protecting something else entirely: their own power to determine what we can see, say, and know.
The views expressed in this article are the author’s own and do not necessarily reflect Al Jazeera’s editorial policy.
