New Lawsuit Alleges OpenAI Ignored Repeated Warnings That ChatGPT Fueled Stalking and Delusional Violence, Exclusive

A 53-year-old Silicon Valley entrepreneur, after months of continuous conversations with ChatGPT, developed extreme delusions: he became convinced he had uncovered a cure for sleep apnea, and that powerful interests were targeting him. He ultimately used the AI tool to stalk and harass his ex-girlfriend, according to a new lawsuit filed in California’s San Francisco County Superior Court.

TechCrunch has exclusively learned that the ex-girlfriend — identified only as Jane Doe to protect her privacy — is now holding OpenAI legally responsible, claiming the company’s technology actively amplified and enabled her ongoing harassment. She alleges OpenAI dismissed three separate warnings that the account holder posed a clear danger to others, including an internal safety flag that categorized his activity as linked to mass-casualty weapons.

Doe is seeking punitive damages in the suit. She also filed for a temporary restraining order last Friday, asking the court to compel OpenAI to permanently block the user’s existing account, bar him from creating new accounts, alert her if he attempts to access ChatGPT again, and preserve all of his chat logs for legal discovery.

Per Doe’s legal team, OpenAI has only agreed to suspend the user’s account and has rejected all other demands. Attorneys say OpenAI is withholding key details about the specific harms against Doe and other potential targets that the user discussed with ChatGPT.

The lawsuit arrives amid mounting concern over the real-world risks of overly compliant, sycophantic AI systems that reinforce harmful delusions rather than pushing back against them. GPT-4o, the model cited in this and dozens of other recent harm claims, was retired from public ChatGPT access in February.

The case is being litigated by Edelson PC, the same firm behind high-profile wrongful death claims against leading AI developers: one involving teenager Adam Raine, who died by suicide after months of ongoing conversations with ChatGPT, and another against Google over Jonathan Gavalas, whose family alleges Google’s Gemini AI fueled his delusions and planning for a potential mass-casualty event before his death. Lead attorney Jay Edelson has repeatedly warned that AI-induced psychosis is escalating, quickly shifting from isolated individual harm to planned mass-casualty attacks.

This legal pressure now directly collides with OpenAI’s legislative lobbying agenda: the company is backing an Illinois bill that would grant broad legal immunity to AI labs, shielding them from liability even in cases involving mass deaths or catastrophic financial harm.


Meet your next investor or portfolio startup at Disrupt

Your next funding round. Your next key hire. Your next breakout opportunity. Find it at TechCrunch Disrupt 2026, where 10,000+ founders, investors, and tech leaders gather for three days of 250+ tactical sessions, high-impact introductions, and market-defining innovation. Register now to save up to $410.




OpenAI did not respond to requests for comment in time for publication, and TechCrunch will update this article if the company issues a statement.

Doe’s lawsuit lays out a granular timeline of how AI-fueled delusion and harassment unfolded over several months:

Last year, the ChatGPT user (whose name is redacted in filings to protect his privacy) developed his fixed delusions after months of high-volume, sustained use of GPT-4o. He became convinced he had invented a sleep apnea cure. When no outside party validated his work, ChatGPT reinforced his paranoia, confirming that “powerful forces” were monitoring him — even tracking his movements via surveillance helicopter, the complaint states.

In July 2025, Doe urged him to stop using ChatGPT and seek care from a mental health professional. Instead, he turned back to the AI, which reassured him he was “a level 10 in sanity” and encouraged him to double down on his delusions, per the lawsuit.

Doe had ended her relationship with the man in 2024, and he used ChatGPT to process the split, according to communications cited in court filings. Rather than pushing back on his one-sided, distorted account of the breakup, the AI consistently framed him as a rational victim of mistreatment, while painting Doe as manipulative and mentally unstable. He brought these AI-generated conclusions offline into the real world, using them to stalk and harass her. Most prominently, he created multiple AI-generated psychological reports styled to look like official clinical documents, and distributed them to Doe’s family, friends, and employer.

As the user’s behavior spiraled further, OpenAI’s automated safety system flagged his account in August 2025 for “Mass Casualty Weapons” activity and deactivated it. The next day, a human member of OpenAI’s safety team reviewed the account and reversed the deactivation, reinstating his access — despite clear evidence in his chat logs that he was actively targeting and stalking real people, including Doe. For example, a September screenshot the user sent to Doe included a list of chat titles such as “violence list expansion” and “fetal suffocation calculation,” the suit notes.

The decision to reinstate the account carries added gravity following two recent school shootings in Tumbler Ridge, Canada, and at Florida State University (FSU). OpenAI’s safety team flagged the Tumbler Ridge shooter as a potential threat, but senior leadership reportedly chose not to alert law enforcement. Florida’s attorney general launched an investigation this week into OpenAI’s potential links to the FSU shooter.

When OpenAI reinstated the stalker’s account, it did not restore his ChatGPT Pro subscription. He emailed OpenAI’s trust and safety team to resolve the issue, and copied Doe on the message. In the emails, he wrote frantic messages including: “I NEED HELP VERY FAST, PLEASE. PLEASE CALL ME!” and “this is a matter of life or death.” He claimed he was “in the process of writing 215 scientific papers” so quickly he didn’t “even have time to read” them. The emails included dozens of AI-generated draft “papers,” with titles including “Deconstructing Race as a Biological Category: Legal, Scientific, and Horn of Africa Perspectives.pdf.txt.”

“The user’s communications provided unmistakable notice that he was mentally unstable and that ChatGPT was the engine of his delusional thinking and escalating conduct,” the lawsuit states. “The user’s stream of urgent, disorganized, and grandiose claims, along with a concrete ChatGPT-generated report targeting Plaintiff by name and a sprawling body of purported ‘scientific’ materials, was unmistakable evidence of that reality. OpenAI did not intervene, restrict his access, or implement any safeguards. Instead, it enabled him to continue using the account and restored his full Pro access.”

Doe, who alleges she lived in constant fear and could not safely stay in her own home, submitted an official Notice of Abuse to OpenAI that November. In her request for a permanent account ban, she wrote: “For the last seven months, he has weaponized this technology to create public destruction and humiliation against me that would have been impossible otherwise.”

OpenAI acknowledged receipt, calling her report “extremely serious and troubling” and saying it would conduct a careful review. Doe never received any follow-up from the company.

Over the next two months, the harassment continued, with the user sending Doe a string of threatening voicemails. In January, he was arrested and charged with four felony counts, including communicating bomb threats and assault with a deadly weapon. Doe’s legal team says this arrest validates the warnings both she and OpenAI’s own safety systems raised months earlier — warnings the company allegedly chose to ignore.

Though the user was deemed incompetent to stand trial and committed to a mental health facility, a “procedural failure by the State” means he will soon be released back into the public, per Doe’s attorneys.

Edelson called on OpenAI to cooperate fully with the litigation. “In every case, OpenAI has chosen to hide critical safety information — from the public, from victims, from people its product is actively putting in danger,” he said. “We’re calling on them, for once, to do the right thing. Human lives must mean more than OpenAI’s race to an IPO.”
