Victims of real-world violence inflicted at the hands of heavy ChatGPT users are forcing courts to evaluate whether developers can be held responsible when chatbot use leads to tragedy.
OpenAI and CEO Sam Altman were hit with multiple lawsuits last month from the families of victims of the Tumbler Ridge mass shooting in Canada, alleging that chief suspect Jesse Van Rootselaar used ChatGPT to plan the attack. The complaints turn on the product liability theory that ChatGPT was defectively designed, that OpenAI was negligent, and that OpenAI should have notified authorities about the alleged shooter’s plans.
The new suits are emblematic of the growing tension between rapidly developing technology and a public cry for accountability as unforeseen harms manifest.
AI isn’t “going away and it’s kind of an arms race as all these companies are trying to get market share,” said Lee Paris, a partner at Davis Goldman PLLC. “Are they cutting safety corners to get to the top?”
The questions are when does the company cross the line from hosting content to encouraging a person to take the next step or suggesting an action, and when does it have an obligation to review and possibly report the activity, he said.
“We’re not going to send a computer to jail, but the bots are manufactured and owned by companies with human beings who are making a lot of money and creating the product,” said Carrie Goldberg, founder of C.A. Goldberg PLLC.
“If humans are creating technology without the safeguards for the content that the bots are producing, then the manufacturer has to be responsible for that content,” she said.
Failure to Warn
The latest round of lawsuits focuses on OpenAI’s failure to notify police of Van Rootselaar’s potential for violence even after flagging her account for gun violence and planning activity.
“I am deeply sorry that we did not alert law enforcement to the account,” Altman wrote in a letter published by local news site Tumbler RidgeLines.
Earlier this week, OpenAI also was hit with a civil lawsuit from the widow of a victim of a mass shooting at Florida State University that killed two people. Florida Attorney General James Uthmeier (R) opened a criminal probe as well, noting that the alleged shooter asked ChatGPT for advice on weapons, ammunition, and how busy campus was ahead of the incident.
“It feels like society is moving really quickly on AI and the people who are in control of it and who know it the best don’t really care about the harm that they’re creating,” said Jay Edelson, the founder and CEO of Edelson PC, which represents plaintiffs in the Tumbler Ridge cases.
The firm also has sued OpenAI on behalf of a family who said ChatGPT became their teen son’s suicide coach and a woman who said the chatbot helped her ex-boyfriend stalk and humiliate her.
A spokesperson for OpenAI called the FSU shooting a tragedy, but said ChatGPT wasn’t responsible and that after the incident, the company proactively shared information with law enforcement.
“ChatGPT provided factual responses to questions with information that could be found broadly across public sources on the internet, and it did not encourage or promote illegal or harmful activity,” the spokesperson said.
The spokesperson said the Tumbler Ridge incident also was a tragedy, and that the organization has already strengthened safeguards.
Anthropomorphic Design
With many product liability claims, the nature of the relationship between the product maker and the user determines what the designer can be accountable for.
Chatbots’ anthropomorphic design can foster a relationship that can be nurtured or exploited, said Frances Green, of counsel at Epstein Becker Green PC. This can create the “special relationship” required for liability because the chatbots are designed to encourage emotional connection, she said.
But it’s unclear whether that relationship has created a duty for chatbot developers to guarantee the tool’s safety or to warn users of risks, she added.
In the absence of comprehensive legislation or regulations, the cases will likely turn on whether it was foreseeable to the developer that the chatbot could be misused in ways that cause harm, Green said.
Some dangerous products, like guns, have a regulatory scheme that protects manufacturers if the guns are made in a specific way. Chatbot developers don’t have such protections, Goldberg said.
Nor do chatbots face regulatory oversight, like cosmetics that must be approved by the Food and Drug Administration, Edelson said.
“Now you’ve got the most powerful consumer technology ever, and it’s out of control and nobody is checking to see whether it’s going to lead to all of these deaths,” he said.
Potential Defenses
The courts haven’t yet decided whether chatbot output is legally protected speech, a defense OpenAI is likely to assert.
In the first federal chatbot harm case, Judge Anne C. Conway of the US District Court for the Middle District of Florida declined to rule that chatbot output was protected by the First Amendment, punting the free speech questions.
Unlike social media platforms like Facebook and Craigslist, which can argue they merely host content made by others and are therefore immune from liability under Section 230 of the Communications Decency Act, ChatGPT has components of encouragement and suggesting further actions to users, Paris said.
Goldberg pointed to a case where survivors of a mass shooting in Buffalo, N.Y., sued YouTube and Reddit for their alleged role in radicalizing the shooter. The companies eventually were dismissed from the case based on Section 230 immunity.
But with chatbots, “feeding the shooters’ actual rhetoric about violence or assisting them is really different from just publishing or letting people see the content,” she said.
Green said it will take a while to figure out the right safeguards around AI.
The chatbot harm litigation “really comes down to a foundationally radical examination” of who we are as “developers and designers of ethical and responsible products,” she said. “It goes much deeper, and it’s got to start before the courtroom.”