A European model for litigating AI should be applied to the US court system, Albert Fox Cahn and Nina Loshkajian of the Surveillance Technology Oversight Project say.
When police use racially biased facial recognition algorithms and employers use discriminatory algorithms to screen job applicants, procedural roadblocks to challenging artificial intelligence in court often leave victims with no way to prove the law was broken.
Luckily, a simple change to the rules of civil procedure can give those harmed by AI their day in court. And the model for it already exists in Europe: the EU AI Liability Directive. It's revolutionary, and it's America's only hope for real AI protections.
Opaque Technology
Right now, technology under the AI label is opaque to those it harms. Scanned by police facial recognition? Rejected by an AI hiring tool? Often you have no idea that AI evaluated you, let alone that you failed.
We know how to prove human bias in court, with decades of case law illustrating how to plead and prove a claim. Illegal bias in workplaces, education, and government, while too often unchecked, is actionable using biased statements from defendants and statistical data (such as from housing testers).
In these situations, plaintiffs can sue, and if they provide enough facts to survive a motion to dismiss, get to the discovery that enables them to win at trial. But everything is different with algorithms.
When Carmen Arroyo applied for a Connecticut apartment for her and her severely disabled son, their fate may have rested in the hands of a CoreLogic algorithmic tool that allegedly couldn’t see that Carmen’s son, Mikhail, was unable to speak or walk.
According to the complaint in a pending case, the landlord received a CoreLogic report stating that Mikhail had a criminal record, an arrest for shoplifting that was later dismissed. The landlord then allegedly denied the family the apartment, changing the course of their lives.
CoreLogic's background check service allegedly relied on an opaque algorithm that likely used scraping technology to search everything available on the web, ultimately generating a score for the Arroyos as potential tenants. CoreLogic then allegedly refused to provide Arroyo with a copy of Mikhail's criminal record report.
Had a human being acted this way, it would have been a simple case, but Arroyo has been stuck in litigation for more than five years, with the case currently on appeal. Many plaintiffs never get as far as the Arroyos, unable to show how the AI system that harmed them works.
So where do we go from here? With corporate self-regulation increasingly the norm, our complacency risks a world where AI makes harmful decisions without any meaningful redress. Tech giants such as Alphabet Inc.’s Google have formed the Frontier Model Forum as the latest foray into corporate-led regulation. But those of us who fight back against harmful tech aren’t holding our breath waiting for real restrictions.
Efforts in Washington, D.C., aren't much more promising. Even the most motivated regulators are hamstrung in what changes they can push through. President Joe Biden's executive order contains platitudes, not public policy.
EU Solutions
Across the Atlantic, European regulators have responded more aggressively, such as with the much-touted EU AI Act. Such an approach may not be tenable under the American system, where regulators tend to be much less eager to address the threat posed by emerging technologies or are blocked in the courts from doing so.
But one aspect of the EU approach maps marvelously onto the American legal system. The EU AI Liability Directive puts the burden on defendants in AI litigation, forcing them to prove that their system operated legally or never affected the plaintiff. This inverts the standard allocation of the burden in civil procedure.
AI burden shifting eliminates the Catch-22 that requires plaintiffs to show how an AI system is unlawful before they can obtain the discovery needed to prove that very claim. Given the high bar for pleading a civil claim under American law, and given the opacity of AI systems, the courts will remain firmly closed to most AI plaintiffs unless this change is made.
But the benefits go even further. By adapting the EU AI Liability Directive to fit the American court system, we can address the incentives AI creates for institutions.
Today, if you are making a high-risk, high-liability decision, you have every reason to put it in the most opaque AI system you can find. The more complex, the better, because complexity makes it impossible for your system's victims to have their day in court. Burden shifting can realign those incentives so companies and governments use AI only where they can do so with confidence, and only with the most explainable, transparent tools available.
We can also enact safeguards to prevent frivolous litigation. Defendants could rebut the presumption by showing their system operated lawfully or never harmed the plaintiff.
And for plaintiffs who survive a motion to dismiss, discovery could be confined to analyzing the algorithm, giving defendants a chance to move for expedited summary judgment solely on the question of the algorithm's lawfulness before delving into further discovery.
If this change sounds radical, that's because it is. AI harms pose one of the gravest challenges to civil adjudication since the Industrial Revolution. And if we're unable or unwilling to reform our civil procedure rules with the creativity and flexibility this moment requires, a generation of Americans harmed by AI may be blocked from receiving the justice they deserve.
This article does not necessarily reflect the opinion of Bloomberg Industry Group, Inc., the publisher of Bloomberg Law and Bloomberg Tax, or its owners.
Author Information
Albert Fox Cahn is founder and executive director of the Surveillance Technology Oversight Project.
Nina Loshkajian is staff attorney at the Surveillance Technology Oversight Project.