- Google, Character A.I. founders remain as defendants in suit
- Companies didn’t explain why LLM words are speech, judge said
Character Technologies Inc. failed to convince a federal judge Wednesday to wholly toss a suit alleging it designed and marketed predatory chatbots to minors that encouraged inappropriate conversations and led to a teen’s death by suicide.
In a first-of-its-kind ruling, the US District Court for the Middle District of Florida denied dismissal on a variety of tort claims, including wrongful death and a violation of the Florida Deceptive and Unfair Trade Practices Act. The court dismissed a claim for intentional infliction of emotional distress.
The novel case is raising questions in an emerging body of law about what constitutes a design defect for consumer-oriented AI products. The Character A.I. app allows users to interact with customizable chatbots.
Character A.I. is also facing another suit in federal district court in Texas that alleges its chatbots encouraged an autistic 17-year-old to self-harm and kill his parents for imposing screen time limits. That case has been paused for arbitration proceedings.
Plaintiff Megan Garcia, whose son allegedly died by suicide as a result of his interactions with the app, alleged a “considerable level of involvement in Character Technologies’ LLM” sufficient to consider Google liable as a component part manufacturer, Judge Anne Conway said. The complaint adequately pleaded that Google “substantially participated in integrating its models into Character A.I.,” the judge said.
The complaint “repeatedly” alleged that the LLM’s anthropomorphic nature and its integration made the app defective, Conway said.
Conway said the complaint sufficiently alleged individual defendants Noam Shazeer and Daniel De Freitas formed Character Technologies to bypass Google’s safety protocols through an acquihire deal. Shazeer and De Freitas joined Google as part of a larger licensing deal between the entities.
“We strongly disagree with this decision. Google and Character AI are entirely separate, and Google did not create, design, or manage Character AI’s app or any component part of it,” Google spokesman José Castañeda said in an email.
The tech companies argued that users have First Amendment listener rights in their chatbot interactions, so the app’s output was protected speech.
But the companies “fail to articulate why words strung together by an LLM are speech,” Conway said.
The tech companies relied on analogizing the chatbot app to non-player characters in video games and interactions with other people on social media. But they “miss the operative question,” which is whether the app’s output is speech, she said.
“The Court is not prepared to hold that Character A.I.'s output is speech,” Conway said.
The judge also rejected the argument that Character A.I. is a service rather than a product. The complaint includes allegations related to the design choices that allegedly were defective, Conway said.
Though Garcia’s son “may have been ultimately harmed by interactions with Character A.I. Characters, these harmful interactions were only possible because of the alleged design defects in the Character A.I. app,” Conway said.
“With today’s ruling, a federal judge recognizes a grieving mother’s right to access the courts to hold powerful tech companies — and their developers — accountable for marketing a defective product that led to her child’s death,” Garcia’s lawyer, Meetali Jain, said in an emailed statement. “This historic ruling not only allows Megan Garcia to seek the justice her family deserves, but also sets a new precedent for legal accountability across the AI and tech ecosystem.”
“It’s long been true that the law takes time to adapt to new technology, and AI is no different,” a spokesperson for Character A.I. said in an email. “In today’s order, the court made clear that it was not ready to rule on all of Character.AI’s arguments at this stage and we look forward to continuing to defend the merits of the case.”
If you or someone you know needs help, call the National Suicide Prevention Lifeline at 988. You can also reach a crisis counselor by messaging the Crisis Text Line at 741741.
The case is Garcia v. Character Techs., Inc., M.D. Fla., No. 6:24-cv-01903, 5/21/25.
—With assistance from Malathi Nayak