As generative artificial intelligence advances, the legal industry is grappling with whether the technology will augment the work of lawyers—or replace them altogether. Last year, a seasoned appellate advocate conducted an experiment: Could generative AI do a better job than he did arguing before the Supreme Court?
Adam Unikowsky, a partner at Jenner & Block who has argued before the Supreme Court thirteen times, fed briefs from one of his cases into a large language model and asked the AI to respond to the justices’ questions. He then compared the AI’s answers to his own and wrote about the experiment on his Substack—raising provocative questions about oral advocacy, legal reasoning, and the ability to devise novel arguments.
We spoke to Unikowsky, as well as James Grimmelmann, a law professor at Cornell, about the test and its implications for lawyers and the legal industry. In this video we explore how lawyers are already using AI in their work, what the technology can and cannot do in the courtroom, and how it could shape the future of legal advocacy.
Video features:
James Grimmelmann, Cornell Law School and Cornell Tech
This transcript was produced by Bloomberg Law Automation.
Captain Kirk: You are a machine.
AI Computer: You will be obliterated.
Narrator: For decades, Hollywood envisioned a future where robots rule the courtroom.
AI Robot: The DNA is a perfect match for Judge Joseph Dredd.
Judge Dredd: It’s a lie!
Robot Judge: Not guilty!
Narrator: Now, one lawyer is pushing the boundaries of legal tech. Can generative A.I. effectively argue a case before the Supreme Court?
Adam Unikowsky: I think it’s one of the greatest inventions in human history. I think it’s printing press part two.
Narrator: That’s Adam Unikowsky, a partner at Jenner & Block and a seasoned appellate court advocate. He’s kind of an A.I. superfan.
Adam Unikowsky: I have a newsletter called Adam’s Legal Newsletter on Substack. I write about artificial intelligence and how it affects lawyering and judging, which is a topic that I’m a little bit obsessed about.
Narrator: Unikowsky has argued before the Supreme Court 13 times. But despite all of that experience, he has a problem he just can’t shake.
Adam Unikowsky: I tend to be a little bit hard on myself. I listen to my oral arguments, obsess over the mistakes I’ve made. After an oral argument, when I sit down, the first thing I think is, damn it, I wish I had answered the question differently, or I wish I had said yes instead of no.
Narrator: Unikowsky had a radical idea.
Adam Unikowsky: I was just curious to see whether artificial intelligence could do better than I did.
Narrator: In other words, Unikowsky wanted to see if he could create an A.I. version of himself. He uploaded briefs from a Supreme Court case that he had argued to a large language model.
Adam Unikowsky: And so all you have to do is just drag the briefs into the window. And it takes a few seconds to educate itself on the briefs. And it’s done.
Narrator: Next, he entered the questions he was asked by the justices during oral argument and asked the A.I. how it would have responded.
Adam Unikowsky: I suggested it act like a well-known Supreme Court advocate.
Narrator: Finally, he created an A.I.-generated voice clone of himself and had it deliver the answers.
A.I. Voice (as Adam Unikowsky): Mr. Chief Justice, and may it please the Court.
Adam Unikowsky: It sounds amazing. It sounds like a real person. And the answers are clear and coherent and calm. I mean, it’s like a wonderful lawyer.
Narrator: Not only was A.I. Unikowsky a confident, quick-witted, and cool-headed public speaker, it could also reason, think laterally, and arrive at unusual and unexpected conclusions.
Adam Unikowsky: One of the questions from Justice Barrett was a very tricky question. And I sort of stuttered my way through the answer.
Justice Barrett: Mr. Unikowsky, when did the state statute of limitations start running or has it?
Adam Unikowsky: So I think that’s an unresolved question, whether it would be tolled. I wish I had done a little bit better.
Narrator: The A.I. responded like a champ.
A.I. Voice (as Adam Unikowsky): Justice Barrett, no, the federal statute of limitations has not expired. For Section 1983 claims, we would apply Alabama’s general personal injury limitations period, which is two years.
Adam Unikowsky: The A.I. responded with a Supreme Court case from a couple of years ago called Reed, which actually was quite relevant to the statute of limitations question. And that was not in the briefs.
Narrator: Unikowsky posted his experiment to his Substack, and it created some buzz, even catching the attention of a Supreme Court justice.
James Grimmelmann: What he decided was that Claude had done it better than he had, which I thought was, on the one hand, sort of refreshingly humble.
Narrator: Is A.I. Adam ready to take the place of a real-life Harvard-educated lawyer with 13 Supreme Court appearances?
James Grimmelmann: Unikowsky has done some really interesting experiments with LLMs, and they show that they have a lot of promise.
James Grimmelmann: Unikowsky treats the persuasiveness of the outputs as a proxy for correctness. And you can’t judge a legal argument just by how good it sounds.
Narrator: James Grimmelmann is a professor at Cornell Tech and Cornell Law School. He wrote the casebook on Internet law and does research on the legal challenges posed by A.I. He says just because A.I. can generate a convincing oral argument, that doesn’t mean it’s ready for the big leagues.
James Grimmelmann: In other words, delivering a convincing oral argument doesn’t guarantee accuracy. You also need a certain intentionality of getting the facts of the case right, connecting them back to authority.
Narrator: Grimmelmann’s concern is that while a large language model can simulate legal discourse, that doesn’t equate to a genuine comprehension of the underlying legal precedents and societal norms that shape our laws.
James Grimmelmann: Law is a social system. It’s a way of resolving disputes in a way that people will accept as fair, just and legitimate. You can’t just replace it with a computer that emits words.
Narrator: And then there’s another problem with generative A.I., as it is now: It tends to make things up.
James Grimmelmann: LLMs produce outputs that match the patterns in the data they were trained on. Sometimes those extrapolations fit with reality, and other times they misfit reality. We call it a hallucination.
Narrator: Lawyers who have submitted A.I.-written briefs without checking them for accuracy have produced some comical results.
News Clip: A legal brief so full of A.I. slop and make-believe case law that it ticked off the judge till she fined him.
James Grimmelmann: I would particularly worry that in a complex technical area of law, like commercial law, that you could get an A.I. to generate a series of arguments that seem facially plausible, but do something like completely misread part of UCC Article 9, and that that reasoning would be good enough to persuade a generalist judge who doesn’t hear a ton of cases in that area.
James Grimmelmann: You should definitely read what the A.I. produces. If you’re writing a brief, you should check the case. Lawyers should always treat the LLM’s outputs as though it’s trying to get them fired for malpractice.
Narrator: But Unikowsky says that in certain circumstances, generative A.I.'s creativity could actually help devise novel arguments to a client’s benefit.
Adam Unikowsky: I asked Claude how the 21st Amendment, which repealed prohibition, was relevant to the civil rights case that I was arguing. Now, the answer is it has no relevance at all. Right. I mean, the 21st Amendment is just totally irrelevant to the legal issues in the case.
Narrator: Claude initially pushed back, but Unikowsky forced the bot to complete the thought experiment.
Adam Unikowsky: Eventually it caved and it gave an answer. It wasn’t a great answer, but it was really the best answer it could come up with. But the point is, those are answers it came up with very quickly that a human could not have come up with in two seconds.
Narrator: So can we expect an LLL, a large language lawyer, to argue in front of the justices anytime soon? The Supreme Court’s current rules do not permit an A.I. advocate, and the court will probably be the last to adapt if this ever comes into vogue.
Narrator: In the meantime, generative A.I. will become another tool to help a lawyer zealously advocate for their client. But like all tools, user beware.
James Grimmelmann: An LLM looks like this useful chainsaw. It’s sitting there. It can help you cut through this thick underbrush of a legal task in front of you, but it could just as easily cut off a limb if you don’t know how to handle it carefully.
To contact the senior producer responsible for this story: Andrew Satter at asatter@bloombergindustry.com