- Big Law firms piling into the business of making AI safe
- Faegre Drinker poaching expedition bolstered data team
It’s called red-teaming.
As Big Law ratchets up for the generative AI economy, some top firms are pouring resources into the business of stress-testing artificial intelligence models for Corporate America, essentially smashing attorneys and data scientists together so they can make sure the machines don’t do anything that would get companies into legal trouble.
“What we’re doing is building both automated and human attacks, where we go to the large language model and, effectively, try to get it to violate legal standards,” said Danny Tobey, the AI and data analytics chair at DLA Piper.
As companies roll out chatbots and other generative AI-powered tools, this work is essential to ensure that they don’t run afoul of the law in areas like bias, compliance, copyright, and privacy.
It’s a rare peek under the hood at what a law firm is doing in AI. As other firms announce hires for innovation officers or partnerships with tech companies, these “algorithmic audits” are examples of how the legal industry is adapting and using technology to win new business.
The confidential nature of the business makes it hard for outsiders to separate the hype from reality, but some are going all-in, said Daniel Linna, a professor at Northwestern Law who studies law firm innovation.
“There are only a handful of firms that are investing in things like this,” he said.
DLA Piper is known for its global presence and its M&A work, bringing in revenue of $3.7 billion last year. The firm is among those that have taken an ambitious approach to a technology that will likely bring upheaval to traditional ways of doing business, with the winners and losers determined by how well they adapt.
AI was used well before ChatGPT got the world’s attention a year ago, and DLA is just one of the Big Law players putting resources into building practices and technology. Other firms leaning into what AI can do include Cooley, Dentons, Allen & Overy, and Gibson, Dunn & Crutcher.
“Some law firms have been working this for five years or a decade thinking, ‘What do we do as a firm—in data, process, technology—how do we standardize things more, how do we better use data analytics, the knowledge base in the firm?’ Those firms have been doing the difficult work,” Linna said.
AI Comes of Age
A lot has changed since DLA opened its AI practice in 2019.
In those days, AI generally meant, “These narrow-purpose mathematical models that you would set up to tell if someone was going to get sick, or if someone was going to repay a loan, or qualified for insurance,” Tobey said. “And those models were easy to govern because they only did one thing.”
Those machine-learning algorithms—used by credit card companies, banks, and your streaming video provider—still make up most of the AI systems in use. Testing them was relatively simple.
“You put in a bunch of inputs, you look at the outputs, and then you see if there are patterns that are unacceptable in those outputs,” Tobey said.
But generative AI has to be approached differently, he said, since, “There’s an almost infinite variety of prompts and an almost infinite variety of outputs.”
DLA’s approach, Tobey said, has been to build teams of lawyers who are subject matter experts in a particular area—in health, financial services, and insurance, for example—backed by “data scientists who are often also lawyers.”
In March, DLA poached a team of data scientists from Faegre Drinker Biddle & Reath. It was a bet, Tobey said, that building the firm’s AI expertise will make it “the law firm of the future.” The deal brought in the head of the Faegre Drinker team, Bennett Borden—an attorney-data scientist with long legal tech experience who started his career at the CIA—as chief data scientist.
The 100-member DLA team led by Borden and Tobey—a doctor and software entrepreneur who represents clients in life sciences, healthcare, and tech—has worked, for example, with insurance regulators to understand how traditional AI models should be tested. The team is developing entirely new mechanisms to test generative AI systems for trustworthiness and anti-bias, Borden said.
It’s exciting work, he said, because there are no agreed-upon standards.
“Working with clients to come up with best practices can actually help inform what those new regulations should be in conversations with the regulators and legislatures,” Borden said.
This idea of “crash-testing” LLMs is extremely important, said Megan Ma, assistant director of the Stanford Program in Law, Science, and Technology and the Stanford Center for Legal Informatics.
“It provides better oversight and transparency around what might be dangerous use cases and potential reasons behind harmful outputs,” she said.
Big Law Arms Race
Algorithmic testing is just one of the emerging lines of business that Big Law is building as generative AI breaks established business models. Firms are also defending clients in litigation against claims of bias or inaccuracy in their algorithms, and helping clients with their AI solutions—in areas like contract drafting, research, and predictive analytics—as well as governance policies.
DLA’s federal law and policy team in Washington DC, advises clients—including OpenAI’s Sam Altman—and recently registered as a lobbyist for the ChatGPT maker. The team includes Tony Samp, a former US Senate staffer and founding director of the Senate’s AI Working Group.
While law firm leaders largely recognize that generative AI brings existential threats and opportunities, not all of them have placed their bets.
Many have been cautious in their adoption of the technology, with some tentatively embracing tools like ChatGPT on a limited scale, while others have chosen a completely hands-off approach as they wait for critical questions about data security and accuracy to be resolved.
“I get calls every week from big firms that are scared that they’re going to fall behind,” said William Eskridge, a professor at Yale Law School who teaches an annual AI seminar. “This technology is already available to any firm that wants to use it. But firms have to totally rethink the structure of their practices.”
“It’s going to be unevenly distributed,” he said. “The winners are going to be the ones that adapt. It might be Big Law firms, but it might also be teeny firms that succeed and are able to take on the giants. The losers are going to be those that don’t adapt or that bet on the wrong technology.”
To contact the reporter on this story:
To contact the editors responsible for this story:
Learn more about Bloomberg Law or Log In to keep reading:
See Breaking News in Context
Bloomberg Law provides trusted coverage of current events enhanced with legal analysis.
Already a subscriber?
Log in to keep reading or access research tools and resources.
