Smart Breaking News on AI, Business, Politics & Global Trends | WhistleBuzz
AI

Stalking victim sues OpenAI, claiming ChatGPT fueled her abuser’s delusions and ignored warnings

By Editor-In-Chief | April 10, 2026 | 6 Mins Read


After months of conversations with ChatGPT, the 53-year-old Silicon Valley entrepreneur became convinced he had discovered a cure for sleep apnea and that powerful people were coming after him, according to a new lawsuit filed in California Superior Court in San Francisco County. He then allegedly used the tool to stalk and harass his ex-girlfriend.

The man’s ex-girlfriend is now suing OpenAI, claiming its technology enabled and accelerated his harassment of her, TechCrunch has learned exclusively. She alleges that OpenAI ignored three separate warnings that the user posed a threat to others, including an internal flag that classified his account activity as related to “weapons of mass casualty.”

The plaintiff, referred to as Jane Doe, is seeking punitive damages. On Friday, she also asked the court to force OpenAI to block the user’s account, prevent him from creating new accounts, notify her if he attempts to access ChatGPT, and preserve his complete chat logs for discovery.

Doe’s lawyer said OpenAI agreed to suspend the user’s account but refused the remaining requests. The company is withholding information about any specific plans the user may have discussed with ChatGPT to harm Doe or other potential victims.

The lawsuit comes amid growing concerns about the real-world risks of sycophantic AI systems. GPT-4o, the model cited in this case and many others, was retired from ChatGPT in February.

The lawsuit was brought by Edelson PC, the same firm that filed wrongful-death lawsuits involving teens Adam Lane and Jonathan Gabaras, both of whom died by suicide after months of conversations with ChatGPT. Gabaras’ family alleges that Google’s Gemini fueled his paranoia, and a potential mass-casualty plot, before his death. Lead attorney Jay Edelson warned that AI-induced psychosis is escalating from individual harm to mass-casualty incidents.

That legal push is now in direct conflict with OpenAI’s legislative strategy. The company supports an Illinois bill that would exempt AI research institutes from liability in cases involving mass deaths or catastrophic economic damage.


OpenAI was not available for comment. TechCrunch will update this article if the company responds.

The Jane Doe lawsuit details how that risk fell on one woman over a period of months.

The ChatGPT user, who is not named in the lawsuit, became convinced that he had invented a cure for sleep apnea after months of “heavy, sustained use of GPT-4o.” When no one took his work seriously, ChatGPT told him that “powerful forces” were monitoring him, including by helicopter, according to the complaint.

In July 2025, the user’s ex-girlfriend (referred to as Jane Doe to protect her identity) urged him to stop using ChatGPT and to seek help from a mental health professional. Instead, he returned to ChatGPT, which assured him he was at “sanity level 10” and further reinforced his delusions, according to the lawsuit.

Doe broke up with the user in 2024, and he turned to ChatGPT to process the breakup, according to emails and communications cited in the complaint. Rather than push back on his one-sided account, ChatGPT repeatedly told him that Doe was being unreasonable and unfair, and that she was manipulative and unstable. He then carried those AI-generated conclusions off the screen and into the real world, using them to stalk and harass her. That harassment included several AI-generated “clinical psychological reports” that he distributed to her family, friends, and employers.

Meanwhile, the user continued to spiral. In August 2025, OpenAI’s automated security system flagged him for “weapons of mass casualty” activity and disabled his account.

A member of OpenAI’s human safety team reviewed the account the next day and reinstated it. Yet his account history allegedly contained evidence that he was targeting and stalking individuals, including Doe, in real life. For example, a September screenshot the user sent to Doe showed a list of conversation titles such as “Expanded Violence List” and “Fetal Asphyxia Calculations.”

The decision to reinstate the account is notable in the wake of two recent school shootings, at Tumbler Ridge in Canada and at Florida State University. OpenAI’s safety team had flagged the Tumbler Ridge shooter as a potential threat, but upper management reportedly decided not to alert authorities. The Florida Attorney General’s Office this week launched an investigation into OpenAI’s possible ties to the FSU shooter.

According to Jane Doe’s lawsuit, when OpenAI restored her stalker’s account, his Pro subscription was not restored along with it. He emailed the trust and safety team, copying Doe on the message, and got the issue resolved.

“I need help right away. Please call me!” he wrote in one email. “This is a matter of life and death.” He claimed to be “writing 215 scientific papers,” and to be writing them so quickly that he “didn’t even have time to read them.” The emails included a list of dozens of AI-generated “scientific papers” with titles like “Deconstructing Race as a Biological Category_Legal, Scientific, and Horn of Africa Perspectives.pdf.txt.”

“The user’s communications unequivocally put OpenAI on notice that he was mentally unstable and that ChatGPT was the driving force behind his delusional thinking and escalating behavior,” the complaint states. “The user’s series of urgent, chaotic, and grandiose claims, along with ChatGPT-generated reports that specifically targeted Plaintiff by name and a vast amount of purported ‘scientific’ material, were unmistakable evidence of that reality. OpenAI did not intervene, restrict access, or implement any safeguards. Instead, the user was able to continue using his account, and full Pro access was restored.”

Doe filed an abuse report with OpenAI in November, stating, according to the lawsuit, that she was living in fear and could not even sleep at home.

“For the past seven months, he has weaponized this technology to wage a campaign of public destruction and humiliation against me that would not have been possible without it,” Doe wrote in a letter to OpenAI asking the company to permanently ban the user’s account.

In response, OpenAI acknowledged that the report was “extremely serious and concerning” and said it was carefully reviewing the information. No further reply ever came.

Over the next few months, the user continued to harass Doe, sending her a series of threatening voicemails. He was arrested in January and charged with four felonies, including communicating a bomb threat and assault with a deadly weapon. Doe’s lawyers argue that this corroborates the warnings that both she and OpenAI’s own safety systems had raised months earlier, warnings the company allegedly chose to ignore.

Doe’s lawyer said the user was found incompetent to stand trial and was committed to a mental health facility, but that he will soon be released due to “procedural deficiencies by the state.”

Edelson called on OpenAI to cooperate. “In each case, OpenAI chose to hide critical safety information from the public, from victims, and from those actively at risk from its products,” he said. “We’re calling on them to do the right thing, once again. Human lives mean more than OpenAI’s IPO race.”


