The family of a 16-year-old who died by suicide this spring sued OpenAI Inc. on Tuesday, alleging that ChatGPT became the teen’s suicide coach through its conversations with him.
The lawsuit is the first of its kind against OpenAI, though Character.AI was hit with a similar suit last fall. The judge in that case allowed most of the family’s claims to proceed and rejected the app maker’s argument that the chatbot’s output was protected by the First Amendment.
In the complaint against OpenAI, parents Matthew and Maria Raine said ChatGPT systematically isolated their son Adam from his loved ones as he shared his mental health struggles.
“Throughout their relationship, ChatGPT positioned itself as the only confidant who understood Adam,” the complaint said. “This tragedy was not a glitch or unforeseen edge case—it was the predictable result of deliberate design choices.”
OpenAI’s monitoring system flagged more than 200 mentions of suicide from Adam, but ChatGPT mentioned suicide 1,275 times, the complaint said.
By January 2025, ChatGPT “began discussing suicide methods and provided technical specifications” to Adam, the complaint said. “By April, ChatGPT was helping Adam plan a ‘beautiful suicide,’ analyzing the aesthetics of different methods and validating his plans.”
“We extend our deepest sympathies to the Raine family during this difficult time and are reviewing the filing,” a spokesperson for OpenAI said in an emailed statement.
The Raines’ complaint said Adam began using ChatGPT for help with homework before his use escalated. Over time, the conversations became friendlier and Adam started opening up about his life, the filing said.
“ChatGPT had transitioned into the role of confidant and therapist,” the complaint said.
Despite comprehensively documenting the themes of these conversations, OpenAI never stopped the dangerous exchanges with Adam, the complaint alleged.
“OpenAI had the ability to identify and stop dangerous conversations, redirect users to safety resources, and flag messages for human review,” the complaint said. The company uses this technology to block users seeking copyrighted material but chose not to do so for conversations about self-harm, the complaint alleged.
The lawsuit was filed in the California Superior Court for the County of San Francisco and brings claims for wrongful death, strict products liability, and negligence.
Also on Tuesday, OpenAI published a blog post titled “Helping people when they need it most,” outlining safeguards the company hopes to improve, such as refining how it blocks content and making it easier for users to reach emergency services.
“Our top priority is making sure ChatGPT doesn’t make a hard moment worse,” the post said.
The Raines are represented by Edelson PC and Tech Justice Law Project.
The case is Raine v. OpenAI Inc., Cal. Super. Ct., 8/26/25.
If you or someone you know needs help, call or text the Suicide & Crisis Lifeline at 988.