New Jersey lawsuit shows how difficult it is to combat deepfake porn

By Editor-In-Chief | January 12, 2026

For more than two years, an app called ClothOff has been terrorizing young women online, and it has proven extremely difficult to stop. The app has been removed from two major app stores and banned from most social platforms, but it remains available on the web and through Telegram bots. In October, Yale Law School’s clinic filed a lawsuit demanding that the app be permanently taken down, that its owners delete all images, and that operations cease completely. But simply finding the defendant has been difficult.

“This organization is incorporated in the British Virgin Islands,” explains Professor John Langford, co-lead counsel on the case, “but we believe it is run by a brother and sister in Belarus. It may even be part of a larger network around the world.”

This is a bitter lesson in the wake of the recent flood of non-consensual pornography generated by Elon Musk’s xAI, much of it involving underage victims. Child sexual abuse material is among the most heavily prohibited content on the internet: it is illegal to create, transmit, and store, and every major cloud service routinely scans for it. But despite those strict legal prohibitions, there are still few ways to deal with image-generation tools like ClothOff, as Langford’s case shows. Individual users can be prosecuted, but platforms like ClothOff and Grok are much harder to police, leaving victims seeking justice in court with few options.

The clinic’s complaint, available online, paints an alarming picture. The plaintiff is an anonymous New Jersey high school student whose classmates used ClothOff to alter her Instagram photos. She was 14 when the original photo was taken, which means the AI-altered version would be legally classified as child sexual abuse material. Yet even though the altered images were clearly illegal, local authorities declined to prosecute the case, citing the difficulty of obtaining evidence from the suspect’s device.

“Neither the school nor law enforcement agencies have disclosed how widely Jane Doe and the other girls’ CSAM was distributed,” the complaint states.

Still, the case is moving slowly. The complaint was filed in October, and in the months since, Langford and his colleagues have been working to serve notices on the defendants, a difficult task given the companies’ global footprint. Once service is complete, the clinic can push for a court appearance and, ultimately, a judgment, but in the meantime the legal system offers little comfort to ClothOff’s victims.

Grok’s case may seem like an easier problem to solve. Elon Musk’s xAI is not hiding, and it certainly has enough money to pay out if lawyers can win a case against it. But Grok is a general-purpose tool, which makes it much harder to hold accountable in court.

“ClothOff is specifically designed and marketed as a deepfake porn image and video generator,” Langford told me. “Litigation becomes even more complex when you litigate a general system that allows users to perform all kinds of queries.”

Many laws in the United States already prohibit deepfake pornography, the most notable being the Take It Down Act. But while it’s clear that specific users are violating these laws, it’s much harder to hold the platform itself accountable. Current law requires clear evidence of intent to harm, which means plaintiffs would have to show that xAI knew its tools would be used to produce non-consensual pornography. Absent that evidence, xAI’s First Amendment rights would provide significant legal protection.

“When it comes to the First Amendment, it’s clear that child sexual abuse material is not protected speech,” Langford said. “So if you’re designing a system that creates that kind of content, it’s clear that you’re operating outside of First Amendment protections. But for a general system where users can run all kinds of queries, it’s less clear.”

The easiest way around these problems would be to show that xAI deliberately ignored the problem. That may well be the case, given recent reports that Musk told employees to loosen safety guardrails on Grok. But even so, it would be a much riskier case to take on.

“A reasonable person could tell you that we’ve known this was a problem for years,” Langford said. “Couldn’t there have been stricter controls in place to prevent something like this from happening? That speaks to recklessness or knowledge, but it’s just a more complicated case.”

These First Amendment issues are why the strongest backlash against xAI has come from legal systems without such robust protections for free speech. Indonesia and Malaysia have taken steps to block access to the Grok chatbot, and UK regulators have launched an investigation that could lead to a similar ban. The European Commission, France, Ireland, India, and Brazil have also taken preliminary steps. US regulators, by contrast, have issued no official response.

It’s impossible to say how these investigations will resolve, but at the very least, the sheer volume of images raises many questions for regulators to investigate, and the answers could be damning.

“If you post, distribute or disseminate material related to child sexual abuse, you are violating criminal prohibitions and could be held liable,” Langford said. “The hard question is: What did X know? What did X do and what didn’t they do? What are they doing about it now?”


