ANALYSIS: Lawyers’ AI Slip-Ups Generate Lessons for All

December 15, 2023, 10:00 AM UTC

When a New York lawyer made national news earlier this year—and was subsequently sanctioned—for citing fabricated court opinions derived from ChatGPT, the expectation was that practitioners across the country would be more prudent when using generative artificial intelligence (AI) in court filings, if only to avoid the public embarrassment and potential sanctions.

Yet this month, a second New York lawyer, who represents Michael Cohen, Donald Trump's former personal attorney, was ordered to demonstrate that the case law he cited in Cohen's motion for early termination of supervised release actually exists; the court suspects it doesn't.

And another lawyer—this time in Colorado—has been disciplined for his blind reliance on ChatGPT.

These incidents serve as a warning of the potential consequences for mindlessly using generative AI, and as a reminder of a lawyer’s professional responsibilities.

A New York (ChatGPT) State of Mind

Michael Cohen’s lawyer filed a motion in November for early termination of Cohen’s supervised release, citing three cases that supported the argument for granting early termination. On Dec. 6, another lawyer appeared for Cohen and notified the court in a response letter that she was unable to verify those citations. Notably, the government didn’t acknowledge these fictitious cases in its opposition.

The Southern District of New York then issued an order to show cause, directing the originating lawyer to provide copies of the cited decisions no later than Dec. 19. If the lawyer is unable to provide them, he must show cause why he shouldn’t be sanctioned for citing nonexistent cases to the court, the order says.

Colorado Rocky for Lawyer

The Colorado lawyer submitted a motion in May with fictitious case law sourced from ChatGPT. He failed to verify the sources before submitting the motion, and when he discovered the fabricated case law before the motion hearing, he failed to disclose the issue to the court or withdraw the motion.

When the court inquired about the case law at the hearing, the lawyer falsely blamed a legal intern. Later, the lawyer admitted in an affidavit that he used ChatGPT to draft the motion, and was subsequently suspended for one year and one day, inclusive of a probationary period.

A Lawyer’s Professional Responsibilities

Generative AI has the ability to revolutionize the legal industry, offering ways to streamline time-consuming tasks for overworked lawyers.

But that’s no excuse to submit to it blindly.

If lawyers are going to use generative AI in their practice, they have to be more cautious.

1. Engage in Due Diligence

There’s no debate that ChatGPT and similar models are prone to hallucinations in legal research: They tend to fabricate case law supporting the very proposition the attorney is seeking.

If generative AI is used for research and briefing purposes, lawyers’ ethical duties of competence and diligence obligate them to double-check the existence and accuracy of the citations, as well as the propositions for which they stand. Lawyers can’t cite to these cases without doing their due diligence.

Understandably, there may be instances where a lawyer doesn’t have a subscription to a legal research tool, or they or their client can’t afford the charges for such a tool in a particular case. Courts, however, won’t entertain these excuses. A number of free resources available through state bar associations can help verify case law, and using an online search engine may be just as effective.

The responsibility of due diligence is also not one-sided. Opposing counsel should review all of the cases cited by their adversary to make certain that the case law exists and stands for the proposition argued. If the case law doesn’t exist, counsel should contact their adversary immediately and suggest that they withdraw the brief or the relevant citations. If their adversary refuses, counsel must flag the nonexistent case law in their opposition or reply briefs and call the court’s attention to the issue.

2. Check Court Rules

Many judges have issued standing orders regarding the use of generative AI when drafting court filings. At least two courts prohibit the use of generative AI, while others require lawyers to disclose its use and certify that the AI work product was diligently reviewed by a human for accuracy and applicability.

Before submitting any court filings, lawyers should check their local rules, as well as whether their judge has issued any AI-specific standing orders, and make sure to abide by their requirements.

3. Don’t Double Down

If the court or opposing counsel draws attention to allegedly fabricated cases in a lawyer’s brief, the lawyer should either produce the cited cases, or admit that they improperly relied on generative AI and withdraw the brief.

If the deadline for filing the brief has passed, and withdrawing it would unduly prejudice the client, attorneys should consider filing an amended brief with the fictitious citations removed.

Ethics rules clearly prohibit making false statements to a court and—more generally—any conduct involving dishonesty or misrepresentation. Lawyers who submit court filings with fabricated content must be honest with the court as soon as they discover it—the speed at which this is done may help reduce the likelihood and severity of sanctions.

4. Don’t Blame Subordinates

Lawyers who blame subordinates when the court or opposing counsel questions their use of generative AI risk being disciplined. Not only is it bad form to blame a subordinate, but the federal and state rules of civil procedure specifically state that by signing a court filing, the attorney certifies that they’ve read the document and that it has a justifiable basis in law. An attorney who signs and submits a court filing without such a belief risks sanctions.

In addition, supervising attorneys have a professional duty to oversee the work of junior lawyers and non-lawyers, and to ensure that they conform to professional ethics rules. This duty includes establishing “internal policies and procedures designed to provide reasonable assurance that all lawyers in the firm will conform to the Rules.”

Law firms that allow the professional use of generative AI should have policies and procedures in place around its use, including a process by which arguments and case law proposed by generative AI are verified using other credible legal research tools.

Lawyers take an oath to faithfully discharge their duties and to practice with professionalism, integrity, and respect. This promise is especially important as lawyers learn of new ways to utilize generative AI in their practice.



To contact the reporter on this story: Golriz Chrostowski in Arlington, VA at gchrostowski@bloombergindustry.com

To contact the editor responsible for this story: Melissa Heelan at mstanzione@bloomberglaw.com
