Jenner & Block’s Michelle Kallen and Haley Tuchman analyze a more prominent role for courts and state bars in AI regulation as law practice adopts more AI tools.
Artificial intelligence has propelled a rapid shift as generative AI tools gain traction. AI-driven research, analysis, and drafting platforms that produce human-like content have potential to revolutionize the practice of law.
As practitioners begin to understand and embrace the benefits of these tools, the task of regulating generative AI’s use has landed at the top of many courts’ to-do lists. Much like the technology they seek to regulate, courts’ efforts to navigate use of AI in legal settings are rapidly developing.
Federal Circuit Courts
Appeals courts nationwide have convened committees to examine emerging AI technologies. The Fifth Circuit Court of Appeals led the way last November when it became the first circuit to publish a proposed rule governing use of AI.
The Fifth Circuit’s proposal would require filers to certify that generative AI wasn’t used to draft documents being filed. To the extent an AI program was used, a party must certify that the filing’s text—including its citations and legal analysis—was reviewed for accuracy. The consequences of failing to do so include possible sanctions and striking the document.
Unsurprisingly, the proposal has sparked opposition from practitioners. Some raised concerns about which technologies would constitute “artificial intelligence program[s]” under the rule.
Common tools such as web search functions or prevalent legal research platforms use AI, often without a practitioner’s awareness. Other critics highlight existing rules—which already require practitioners to review court filings for accuracy—to critique the Fifth Circuit’s proposal as unnecessary.
The Third and Ninth Circuits followed the Fifth Circuit’s lead. Last month, Chief Judges Michael Chagares and Mary Murguia convened their own committees, neither of which has published formal guidance yet.
The Fourth Circuit took a different approach, asking its information technology staff to draft a report and recommendations analyzing use of AI. There have also been rumblings from the Second Circuit rules committee, but the court has yet to take any formal steps.
Federal District Courts
District-wide rules have yet to emerge in almost all the country’s 94 federal districts. The Eastern District of Texas is an exception, adopting a rule last December that requires attorneys to review and verify AI-generated content included in filings. Michigan’s Eastern District also has a pending proposal that would require the same, in addition to an obligation to disclose when AI has been used to compose or draft a filing.
Many judges across the country have issued standing orders to regulate parties’ behavior in their courtrooms. Judge Brantley Starr in the Northern District of Texas was first, requiring litigants to certify their filings either didn’t use AI or that a human checked their accuracy.
District and magistrate judges from the Eastern District of Pennsylvania to the Northern District of California issued similar standing orders.
States
State courts are also convening committees to examine the legal and ethical implications of AI use. In September, New Jersey’s Supreme Court convened a 31-member committee, drawing its members from inside and outside the judiciary.
State bars will also play an important role. As the ethical regulators of the profession, they can provide clear guidance on AI use that benefits both practitioners and clients.
California became the first state to issue an ethics advisory opinion in November, with Florida doing the same last month. The California Bar suggested lawyers should consider disclosing any intent to use generative AI to their clients.
The Florida Bar went one step further, suggesting lawyers should obtain permission from clients before disclosing confidential client information to third-party AI platforms. AI use is also under review by state bars in New York, Texas, Illinois, New Jersey, Minnesota, and Kentucky, with more guidance expected soon.
Judiciary
Apart from lawyers’ use of generative AI tools, courts might seek to regulate themselves. Law clerks and judges may find AI helpful when sifting through rising caseloads. In his 2023 year-end report, Chief Justice John Roberts even predicted that “judicial work—particularly at the trial level—will be significantly affected by AI.”
So far, the Michigan Bar’s Standing Committee on Judicial Ethics has issued guidance directing its state court judges who use AI tools when deciding cases to “take reasonable steps to ensure” those tools “are used properly.”
Generative AI has power to transform the way attorneys approach their work. Court rules and state bar opinions governing use of this emerging technology reflect the need for a balanced and ethical approach.
Many courts have embraced this innovation while maintaining fairness, transparency, and accountability within the confines of the courtroom. By thoughtfully navigating these challenges, practitioners and courts can harness the benefits of generative AI while upholding the high standards of the legal profession.
This article does not necessarily reflect the opinion of Bloomberg Industry Group, Inc., the publisher of Bloomberg Law and Bloomberg Tax, or its owners.
Author Information
Michelle Kallen is a partner at Jenner & Block focusing on complex matters before federal and state appellate courts. She is a former solicitor general of Virginia.
Haley Tuchman is an associate at Jenner & Block specializing in intellectual property and technology litigation.