For more than two years, an app called ClothOff has been terrorizing young women online, and it has been extremely difficult to stop. The app has been removed from two major app stores and banned from most social platforms, but it remains available on the web and through Telegram bots. In October, Yale Law School's clinic filed a lawsuit demanding that the app be permanently taken down, that its owners delete all images, and that operations cease entirely. But even identifying the defendant has been difficult.
“This entity is incorporated in the British Virgin Islands,” explains Professor John Langford, co-lead counsel on the case, “but we believe it is run by a brother and sister in Belarus. It may even be part of a larger network that operates around the world.”
This is a bitter lesson in the wake of the recent flood of non-consensual pornography generated by Elon Musk's xAI, much of it involving underage victims. Child sexual abuse material is among the most strictly prohibited content on the internet: it is illegal to create, transmit, or store, and all major cloud services routinely scan for it. But despite those prohibitions, there are still few ways to deal with image-generation tools like ClothOff, as Langford's case shows. Individual users can be prosecuted, but platforms like ClothOff and Grok are much harder to police, leaving victims who seek justice in court with few options.
The clinic's complaint, available online, paints an alarming picture. The plaintiff is an anonymous high school student in New Jersey whose classmates used ClothOff to alter her Instagram photos. She was 14 years old when the original photo was taken, which means the AI-altered version would be legally classified as child sexual abuse material. Yet despite the altered images being clearly illegal, local authorities declined to prosecute, citing difficulty obtaining evidence from the suspect's device.
“Neither the school nor law enforcement agencies have disclosed how widely Jane Doe and the other girls’ CSAM was distributed,” the complaint states.
Still, the case is moving slowly. The complaint was filed in October, and in the months since, Langford and his colleagues have been working to serve the defendants, a difficult task given the operation's global reach. Once service is complete, the clinic can push for a court appearance and ultimately a judgment, but in the meantime the legal system offers little comfort to ClothOff's victims.
Grok might seem like an easier problem to solve. Elon Musk's xAI is not hiding, and it has more than enough money to pay out a judgment to any lawyer who wins a case against it. But Grok is a general-purpose tool, which makes it far harder to hold accountable in court.
“ClothOff is specifically designed and marketed as a deepfake porn image and video generator,” Langford told me. “Litigation becomes even more complex when you're litigating against a general-purpose system that allows users to run all kinds of queries.”
Many US laws already prohibit deepfake pornography, most notably the Take It Down Act. But while it's clear that specific users are violating these laws, it's much harder to hold the platform itself accountable. Current law requires clear evidence of intent to harm, which means plaintiffs would have to show that xAI knew its tools would be used to produce non-consensual pornography. Absent that evidence, xAI's First Amendment rights would provide significant legal protection.
“When it comes to the First Amendment, it’s clear that child sexual abuse material is not protected speech,” Langford said. “So if you’re designing a system that creates that kind of content, it’s clear that you’re operating outside of First Amendment protections. But for a general system where users can run all kinds of queries, it’s less clear.”
The simplest way around these obstacles is to show that xAI willfully ignored the problem. That could well be the case, given recent reports that Musk told employees to loosen Grok's safety guardrails. But even so, it would be a much harder case to take on.
“Reasonable people can see that we've known this was a problem for years,” Langford said. “Couldn't there have been stricter controls in place to prevent something like this from happening? That speaks to recklessness or knowledge, but it's just a more complicated case.”
These First Amendment issues are why the strongest backlash against xAI has come from court systems without such robust speech protections. Indonesia and Malaysia have moved to block access to the Grok chatbot, and UK regulators have launched an investigation that could lead to a similar ban. The European Commission, France, Ireland, India, and Brazil have also taken preliminary steps. U.S. regulators, by contrast, have issued no official response.
It's impossible to say how these investigations will resolve, but at the very least, the sheer volume of images gives regulators plenty of questions to ask, and the answers could be damning.
“If you post, distribute, or disseminate child sexual abuse material, you are violating criminal prohibitions and could be held liable,” Langford said. “The hard question is: What did X know? What did X do, and what didn't they do? What are they doing about it now?”
