Social media platforms are (again) under the microscope. Courts are taking a closer look at their failure to use more effective and reliable age verification to prevent underage use, and at their failure to properly verify that users are who they claim to be.
Whether the platforms can be held liable for their verification processes depends on courts’ interpretations of Section 230 of the Communications Decency Act. Can they be held liable for fictitious information entered by users themselves? Can they be held liable for wrongdoing that occurs after a more involved verification process fails? It is a fact-dependent analysis, and platforms can expect to see their immunity stripped in certain circumstances.
Loss of Immunity for Online Marketplace
In August 2020, husband and wife Joseph and Jossline Roland arranged to meet, in a Denver parking lot, the seller of a used car they had found on the online marketplace Letgo. The seller appeared to have a “verified” account on Letgo, which claims that it uses “machine learning” to identify and block inappropriate content (e.g., stolen goods) and that it continues to work closely with local law enforcement to ensure the “trust and safety of the tens of millions of people who use Letgo.”
When the Rolands met the seller in a Petco parking lot to purchase the car for their daughter, they were robbed and killed, and the seller fled the scene. It was later discovered that the seller had used a fictitious name when opening his Letgo account and had advertised a car that had been stolen a few days before the meeting. Letgo had initially verified the seller’s account with an email address.
The seller was later identified and convicted of murder in August 2022. The Rolands’ heirs also filed a civil lawsuit against Letgo (and its subsequent purchaser, OfferUp) in April 2022, claiming that the platform misled users into thinking that it actually verified the identity of people posting ads on the marketplace. The plaintiffs asserted claims of negligence, gross negligence, fraud, negligent misrepresentation, wrongful practices under the Colorado Consumer Protection Act, loss of consortium, and wrongful death.
The District of Colorado dismissed the complaint for failure to state a claim, but only after deciding that Letgo wasn’t entitled to Section 230 immunity. In its analysis, the court said that immunity was appropriate where the “‘verification’ designation was considered a neutral tool functioning through voluntary inputs by a user and, thus, not content developed or created by the platform.” Where the platform contributed, in whole or in part, to the content in question, however, it wasn’t entitled to Section 230 immunity.
The court noted that during its verification process, Letgo sends a communication to the email address or telephone number provided by the user to confirm that it really exists. Once confirmed, Letgo represents to others that the person offering the goods for sale has gone through some modicum of verification. It does so by displaying the term “Verified with” on the user’s profile. The court found that Letgo “contributed in part” to the verification representations on its platform, and didn’t just passively display third-party representations. Consequently, the court found that Letgo wasn’t entitled to Section 230 immunity.
The case was appealed to the Tenth Circuit, which earlier this month affirmed the district court’s dismissal of the complaint, and decided not to formally review Letgo’s cross-appeal of the lower court’s decision on Section 230 immunity. However, the Tenth Circuit stated that if Letgo “‘is responsible, in whole or in part, for the creation or development of information’ provided on the platform, it may be liable.”
Fifth Circuit’s Divided View of Section 230
In December, several Fifth Circuit judges joined a scathing dissent from the majority’s denial of rehearing en banc on the circuit’s interpretation of Section 230, which left in place “sweeping immunity” for social media companies.
Snap, Inc. (the company that owns Snapchat) shouldn’t be immune from liability for designing a platform that encourages “users to lie about their age and engage in illegal behavior through the disappearing message feature,” the dissenting judges said.
In that case, the plaintiff John Doe was sexually abused by his high school teacher when he was 15 years old and sought to hold Snap accountable for its alleged encouragement of the abuse. Doe’s teacher allegedly shared explicit messages with him on the platform.
Doe asserted claims of negligent undertaking, negligent design, and gross negligence against Snap, alleging that Snap should have stronger age-verification requirements to help protect minors from predators. In addition, the plaintiff claimed that Snap failed to use its data-mining services to intervene when an adult started sending explicit messages and photos to him, a minor.
Despite the complaint’s emphasis on Snap’s own conduct, the Southern District of Texas found that Snap was immune from liability under Fifth Circuit precedent. The dissenters opined that it was past time to reconsider that erroneous interpretation, and that “immunity from design defect claims is neither textually supported nor logical because such claims fundamentally revolve around the platforms’ conduct, not third-party conduct.”
When a platform’s own conduct is at issue, other courts have also refused to grant immunity on claims of misrepresentation, false advertising, or other causes of action arising from the platform’s representations regarding its publishing of third-party content. For example, if a website represents that it offers “accurate data,” those representations (and the manner in which information is presented, or withheld) may not be immune from liability.
Taking on Liability with Stricter Age Verifications
There is significant pressure on legislators to enact stricter age-verification requirements, and on social media platforms to design systems that satisfy them. A higher standard is inevitable, and platforms should be mindful of the legal implications.
A more involved verification process, representations about that verification, and assurances about the adequacy of that process may severely limit a platform’s chances of immunity under Section 230. Such processes may go beyond neutral tools operating on voluntary user inputs and require the platform to contribute to the verification representation, depriving it of immunity. Much will depend on the requirements imposed on platforms, and on whether they can build tools that are effective while preserving their passive role in displaying third-party content.
Courts are increasingly finding opportunities to rein in the broad immunity once granted to social media platforms, and this is certainly an evolving area. Whether the plaintiffs assert a plausible claim for relief is a separate issue.