The US Tax Court is starting to weigh its own guardrails around artificial intelligence misuse amid the additional thorny challenges of a high proportion of self-represented parties and the need to shield sensitive taxpayer information.
A new disciplinary system must account for the fact that over three-quarters of the court's cases are pro se, said Judge Mark Holmes.
“The Tax Court wants to proceed gingerly,” said Holmes, who joined the court in 2003, in an interview. “If a pro se person does it, it’s not really a violation of professional ethics because they’re not lawyers, and we would have to figure out some other way of determining what the appropriate sanction is.”
In a recent case, Clinco v. Commissioner, the judge emphasized that submitting briefs with fictitious case law is unacceptable.
Holmes cited US Supreme Court Chief Justice John Roberts’ “advice to lawyers who write briefs with citations of nonexistent cases: ‘Always a bad idea.’”
Clinco serves as a stern warning against unethical AI use and indicates the court is considering a process to punish those who don’t proceed with caution, Holmes said. A good starting point is to examine how other courts increasingly issue sanctions, he said.
Arrow in Quiver
Right now, IRC Section 6673 is the most prominent Tax Code provision available to penalize improper AI use, Holmes said. The provision allows the Tax Court to impose penalties up to $25,000 on those who bring frivolous or groundless arguments, or delay proceedings.
Gilbert Rothenberg, former chief of the Appellate Section of the Justice Department's Tax Division, said sanctions against taxpayers and attorneys successfully curbed frivolous arguments during his government career.
“That was certainly an arrow in the quiver,” Rothenberg said. “To the extent the use of AI is improper, you could make a legitimate analogy there.”
For example, the US Court of Appeals for the Tenth Circuit issued a $1,000 sanction to a Maryland attorney after she admitted using a generative AI tool to draft a brief citing multiple nonexistent cases. The court also referred her to attorney disciplinary authorities over her "reckless" conduct, just one of hundreds of such examples.
Meanwhile, some sanctions have been harsher, irreparably harming a lawyer’s standing, said Ralph Artigliere, a retired Florida Circuit Court judge who now advocates for responsible integration of AI into the legal profession.
One such case, Johnson v. Dunn, resulted in the disqualification of three attorneys who submitted a motion with hallucinated case citations.
"To me, that's almost a career-ending deal for them," Artigliere said. "Your reputation with judges and other lawyers is on the line. I mean, that's a big deal."
‘Feeling Our Way’
Issuing sanctions isn't as straightforward in Tax Court, where Holmes said he's seen fewer than a dozen instances of hallucinated cases being cited and hasn't yet issued any sanctions himself.
The court has a disciplinary committee governing attorney ethics as well as a rules committee, so any potential guidance for self-represented taxpayers could come from those bodies.
But the judges on the court need to confer and build consensus first, he said.
“The rules committee might, for instance, amend the rules to require a certification that AI has not been used,” Holmes said. “But I’m not sure that would make any sense since there are perfectly legitimate uses of AI. We’re feeling our way in the dark still about where to go on this.”
Rothenberg said using AI improperly to support a legitimate argument isn’t the same as making a frivolous argument, and Section 6673 may not cover all mistakes that AI can conceivably create.
As more pro se litigants rely on AI, likely spurring more case hallucinations, Rothenberg expects the court will be more inclined to let self-represented taxpayers remake their arguments rather than issue sanctions.
“It might create additional work for the clerks that work for the judges,” he said. “But it kind of goes with the territory.”
Protecting Taxpayers
Pro se taxpayers also need extra protection against AI-related misuse of their personal information contained in their tax documents, as well as the potential for IRS attorneys to use AI-generated false cases in litigation.
In Khoja v. Commissioner, for instance, Judge Jennifer Siegel ordered a hearing to discuss an IRS motion that included a citation to a non-existent case.
During that hearing March 11, Siegel questioned the IRS attorney who signed the motion about apparent AI use and what safeguards the office has in place, said Katherine Jordan, tax controversy counsel at Miller & Chevalier Chtd.
“The IRS’s use of AI in Khoja should serve as the canary in the coal mine, warning others of the dangers of using AI without meaningful human review,” Jordan said in a statement.
It’s reasonable to expect many pro se litigants will use AI to prepare or respond to filings, but the Tax Court must formulate rules that discourage irresponsible AI use by both practitioners and pro se taxpayers, she said.
Going forward, any disciplinary system addressing large language model misuse must also protect against leaking private taxpayer information, Holmes said.
The court has always handled confidential information carefully to protect taxpayer privacy and guard against identity theft, but now millions of pages of private information in its database are at risk from improper AI use, he said.
“I’d say in almost 100% of the cases, Social Security numbers will pop up, bank account numbers will pop up, the names and identifying information of minor children will pop up,” Holmes said.