A portrait of Jeffrey Epstein is displayed on a tablet screen next to the "Epstein Library" page on the U.S. Department of Justice's website, February 11, 2026.
Véronique Tournier | AFP | Getty Images
Victims of notorious sex offender Jeffrey Epstein filed a class-action lawsuit Thursday against the Trump administration and Google on behalf of themselves and other survivors, alleging that their personal information was improperly disclosed and published.
The lawsuit, filed in the U.S. District Court for the Northern District of California, where Google is headquartered, alleges that the Justice Department "doxxed" approximately 100 Epstein survivors in late 2025 and early 2026, and that "online entities like Google continue to republish the information, refusing victims' requests to remove it," even after the government acknowledged its error and retracted the material.
Regarding Google, the lawsuit alleges that the company's core search engine and its artificial intelligence feature, AI Mode, exposed victims' personal information.
“Survivors now face new trauma,” the lawsuit states. “Strangers are calling them, sending them emails, threatening their safety, and accusing them of colluding with Epstein when in fact they are victims of Epstein.”
The complaint was filed by an Epstein victim using the pseudonym Jane Doe.
After months of pressure, the Justice Department released more than 3 million additional pages of Epstein-related documents earlier this year, including images and videos. Epstein committed suicide in a New York City jail in August 2019, weeks after he was arrested on federal child sex trafficking charges.
In taking on Google, the plaintiffs are testing the limits of a major legal shield for internet companies and social media sites. Section 230 of the Communications Decency Act has long allowed major U.S. platforms to avoid liability for content that appears on their websites and apps.
Internet giants face new challenges in defending that protection as AI-generated content proliferates and fresh controversy erupts over the publication of non-consensual sexual images, including so-called deepfake pornography. Earlier this month, Google was sued in a wrongful-death case by the father of a 36-year-old man who claimed that the company's Gemini chatbot drove his son to attempt a "mass casualty attack" and ultimately take his own life.
The Epstein victims' lawsuit alleges that Google, through its design choices, "deliberately" incited harassment by hosting information about victims, and argues that the company's AI Mode feature "is not a neutral search index." The complaint comes after two jury verdicts this week, one against Meta and one against Google's YouTube, found that the platforms had failed to adequately police their sites for content that causes real-world harm.
New Mexico Attorney General Raúl Torrez, who is spearheading the state's lawsuit against Meta, told CNBC this week that "there is a clear possibility that these lawsuits will motivate Congress to reconsider Section 230 and significantly amend, if not eliminate, Section 230."
The latest lawsuit alleges that Google's AI-generated content exposed victims' personal information, and that AI Mode produced those details in response to user queries.
The complaint also alleges that the government has failed in the past to force technology platforms to remove such material, allowing victims' information to remain exposed.
"As part of this response, which was generated repeatedly across multiple platforms and various devices, Google's AI Mode included Plaintiff's full name, displayed her full email address, and generated a hypertext link that allowed anyone to email Plaintiff directly with the click of a button," the complaint states.
Representatives for Google and the Trump administration did not respond to requests for comment.
—CNBC’s Dan Mangan and Jonathan Vanian contributed to this report.