Character.AI’s announcement that it will ban users under 18 from conversing with chatbots is generating questions about the company’s planned age verification process and about broader potential chatbot harms, even as child safety advocates praised the move.
The artificial intelligence company said Wednesday that by Nov. 25 it will identify users under 18 and stop them from having open-ended conversations with its customizable chatbots, with other restrictions in place until then. In the same statement, the company said it would develop a separate under-18 experience by that deadline.
The announcement comes in the wake of a growing number of civil lawsuits filed over the last year by the families of teens who experienced mental health crises or died by suicide after becoming emotionally attached to chatbots developed by Character.AI and OpenAI.
“This is the only time in 30 years of litigation that I’ve seen a company take responsibility in this manner,” said Matthew Bergman, the founder of the Social Media Victims Law Center and one of the attorneys representing the families. “This is a step in the right direction and it appears to be a big step.”
Bergman said he hoped other companies would follow suit, stressing that chatbot developers “still need to take responsibility for what they’ve done, but hopefully there won’t be any more cases in the future.”
Character.AI’s latest move is part of a tradition of the media industry self-regulating when it comes to minors, said Kathleen Farley, vice president of litigation for the tech industry trade association Chamber of Progress. She compared Character.AI’s voluntary age verification requirement to the way the movie and video-game industries handle age ratings.
“The First Amendment is all about private actors choosing what they do and do not say, choosing where they do and do not want to enter into dialogue, and the audience that they choose to engage with,” Farley said. Character.AI’s decision is “the kind of decision that the First Amendment encourages people to make,” she said.
Character.AI’s decision is a “powerful” example, she said, and more tech companies are likely to follow suit on age verification in response to government probes and public outcry generated by the litigation.
Companies “are seeing how their customers respond, and they’re making their own judgments about what their product is supposed to do,” Farley said.
Litigation to Continue
But what Character.AI does in the future doesn’t erase the events of the past, and so existing litigation isn’t likely to end, Farley said.
And for their part, advocates and litigants are skeptical that the limits the company has placed on itself will be enough.
Megan Garcia, the first plaintiff to sue Character.AI after her son died by suicide, said in an emailed statement that Character.AI’s “solution for age verification is meaningless without transparency of how this will be achieved.”
That lack of transparency about the intended age verification process underscores the need for greater regulation, Garcia said, such as the Kids Online Safety Act and another bill introduced in the Senate earlier this week that would ban AI companions for minors.
Meetali Jain, executive director of the Tech Justice Law Project, said in a statement that Character.AI’s announcement was a “good first step to ensuring these products are safer,” but it’s not detailed enough.
Imposing an age limit “reflects a classic move in the tech industry’s playbook: move fast, launch a product globally, break minds, and then make minimal product changes after harming scores of young people,” Jain said in the statement.
She added in an interview with Bloomberg Law that age limits should have been part of Character.AI’s product release strategy, and that the ban “doesn’t change any of the incentives to design these products differently” because there are also “adults who we’ve seen are susceptible to the manipulation baked into these products.”
The company also has talked about opening other forms of entertainment media to users under 18 without explaining what that really means, Jain said.
The lawsuits brought on behalf of minors revealed “the tip of the iceberg” when it comes to chatbot harms, she said.
The next wave, Jain said, will likely focus on harms to adults.
If you or someone you know needs help, call or text the Suicide & Crisis Lifeline at 988. You can also reach a crisis counselor by messaging the Crisis Text Line at 741741.