With AI capturing lawyers’ attention at a rapid rate, Bloomberg Law’s legal analysts are assessing the real-world impact that AI technology and tools are having on lawyers and the law. This analysis piece is one of five featured in a new report, Artificial Intelligence: The Impact on the Legal Industry, currently available to subscribers on the In Focus: Artificial Intelligence page and soon to be released to the public.
It was bound to happen: We have what look like the first AI slop opinions issued by US district courts.
On July 20, Judge Henry T. Wingate of the US District Court for the Southern District of Mississippi signed a temporary restraining order rife with factual mistakes and missing citations. And on June 30, Judge Julien Xavier Neals of the US District Court for the District of New Jersey signed an opinion in In re CorMedix Inc. Securities Litig. that contained a number of serious errors in the cited case authority. These citation errors went beyond typos to what one litigant described as “pervasive and material inaccuracies.”
Given the pattern of errors, it seems likely that an artificial intelligence tool did some of the drafting in each of these cases, though neither court has explained how its (now withdrawn) decision was generated.
Litigants have been filing slop briefs for some time now, with increasing consequences. But using generative tools to write an opinion, apparently without verifying whether the results were accurate, represents a new escalation in the spread of hallucinated legal authority.
Considering the incentive structures of litigation, expect the situation to get worse unless courts impose truly punitive sanctions for the unscrupulous or incompetent use of AI, and likely make changes to how judicial clerks are trained.
Weapons of Mass Disruption
Generative AI has already disrupted legal markets, and lawyers aren’t even clear on its best uses yet. AI tools are pervasively available both online and in legal platforms, and the ability to instantly generate text that looks and sounds reasonable is a potent intoxicant in a field where cost (often measured in a lawyer’s time) is the weapon with which litigants wage battle. Litigation is a winner-take-all activity: There are always incentives to restrain costs on your own side and to increase them on the other.
Furthermore, lawyers have an ethical duty of competence to master technological tools that can reduce costs for clients. And other legal trends—like rising hourly rates, burgeoning discovery disputes, and docket logjams—sharpen the potential value of AI shortcuts in litigation.
But there’s no denying that the precedent and legal arguments lawyers use in litigation, as well as the physical evidence they use to build a case, can be convincingly faked by a machine and presented in court as objective reality.
Incentives Misaligned
Courts have long been obliged to check all of the sources lawyers cite for their arguments, unable to trust litigants to uphold their duty of candor to the court. Opposing counsel also check each other’s authority citations, both to point out errors and to prepare counterarguments against unhelpful precedent. But all that checking is a cost imposed on honest litigants and courts, at least in part by a lack of candor.
The rising tide of fake briefing makes it even more laborious (and important) to carefully check everything asserted in a brief. But AI slop also highlights the cost all of society pays when some litigants are dishonest. If one litigant can generate a phony brief in half an hour and push the painstaking work of verifying its contents onto their opponents or the court, a little embarrassment if caught cheating might seem a small price to pay.
The threat is particularly potent in cases where money is tightest and rancor highest, such as family law disputes. The price also seems smaller where a lawyer’s reputation and ethical compass are already severely damaged.
In short, these tools seem to shift the advantage to exactly the types of litigants the system should seek to disadvantage.

Traditionally, clerks have been the first line of defense against disingenuous legal citations: Clerks generally pull and review citations in litigants’ briefs and often draft court opinions. But there’s virtually no standardized training for federal clerks. Instead, individual judges generally decide whom to hire as clerks, how long clerks serve, and what training they receive.
We May Need a Hammer
Courts are trying to respond. According to Bloomberg Law’s tracker for judicial standing orders on AI, there are now 43 federal courts with explicit orders reminding litigants of their ethical obligations where AI is concerned.
Some impose additional disclosure requirements on litigants who use generative AI—which is a form of cost-shifting that at least forces AI use into the open and makes it explicit that the lawyer is responsible for everything filed in court, regardless of what generated the text.
And the sanctions that courts impose for AI slop are increasing, but probably not enough. So far, sanctions don’t cover the costs of responding to AI slop, and judges still seem reluctant to sanction apologetic lawyers for AI-hallucinated citations. Permission to refile is common, and individual sanctions have mostly been non-monetary or very small. Costs are still not landing where they belong.
If courts are going to get a handle on the slop problem, that has to change. The rise of discovery abuse has demonstrated that lenient sanctions invite dice-rolling. A combination of new rules and enforcement of existing rules could help. After all, Federal Rule of Civil Procedure 11 covers knowingly filing motions supported by fake authority, but what about when lawyers plead ignorance or negligence? New standing orders seek to fill that gap, and more explicit rules may be necessary.
But no rule will work without an effective punishment for cheating. Sanctions must negate the financial benefit of fabricating a brief and offset the costs of responding to one. And they must be imposed on a reliable, widespread basis. Furthermore, courts may need to institute mandatory training for clerks about using generative AI and efficiently reviewing long lists of authority. Finally, AI awareness should be added to the suite of training that judges receive.
The Court as Bulwark
It should be clear by now that these tools can invent sources or misrepresent what real cases say. The lawyer’s duty of competence, and the duty of candor, therefore require that any lawyer using an AI drafting tool carefully check the results to ensure they don’t file something false with the court. Or, in these cases, on behalf of the court.
Yet here we are, and the ability of litigants to hijack litigation with these tools is a serious threat. In fact, if courts become awash in fakery, justice and faith in the legal system, which is already declining, may themselves be casualties.
The genie isn’t going back into the bottle. Sanctions for lawyers, training for clerks on the front line of cite checking, and a low tolerance for misrepresentations to the court at least stand a chance of keeping AI on the side of justice.