Sure, You’ve Got an AI Policy, But Is Everyone Following It?

May 29, 2024, 9:00 AM UTC

Seemingly every law firm or business now has a safe AI-use policy, but the data suggest a lot of employees are taking chances with the enticing technology.

Most policies cover the same ground: never upload data—client, firm, or personal—without express consent; tell clients when you use generative AI tools; and never, ever trust AI-generated output that hasn’t been thoroughly vetted by humans, because of the technology’s tendency to “hallucinate” bogus answers.

All well and good. But the question is whether those policies will pass the human nature test.

Generative AI models like OpenAI’s ChatGPT, Google’s Gemini, and xAI’s Grok can write, do document research, and summarize complex data hundreds of times faster than humans. And there lies the issue for the real world: Once employees get used to such tools—and already, a generation of students is turning to ChatGPT for their homework—it might be next to impossible to put the genie back in the bottle, regardless of corporate policies.

AI is already an accepted fact for many lawyers, corporate warriors, and government employees. Bloomberg Law’s latest Legal Technology and Operations Survey found that 46% of legal workers had used AI for work, most commonly for research and drafting communications.

Microsoft and LinkedIn found that three-quarters of knowledge workers are using AI on the job—and that 78% of AI users “are bringing their own AI tools to work.” Revealingly, more than half of those using AI were reluctant to admit using it for their work. That’s where the human nature part comes into play.

The Rise of ‘Shadow IT’

The term of art here is “shadow IT”—the tendency of employees to ignore policy and hook their personal devices and apps into the company network, with the possibility of serious consequences. Some have taken to calling the newest incarnation “shadow AI.”

Erica Wilson, vice chair of the AI team at Fisher Phillips in Pittsburgh, told me that the statistics probably understate the degree to which lawyers are already using AI, though that doesn’t mean they’re using it in a nefarious way.

“You have to assume someone in your company is playing around with it,” she said. “The technology is too useful and too cool to expect people not to be trying it.”

“We’d be naïve to think that someone’s going to look at it and not think, ‘This could speed up my review.’”

The danger of being too slow to adopt the technology is usually presented as a competitive risk, not an internal security risk.

“Years ago, IT didn’t get onboard quickly enough with delivering resources to match business needs,” explains Nicholas Brackney, Dell Technologies generative AI marketing lead. “And that’s how ‘shadow IT’ started.”

It’s important that doesn’t happen with generative AI, he said.

“We have an opportunity to right the ship now to empower all the goodness from this major technology inflection point.”

‘Burden of Menial Work’

Brackney says the biggest danger isn’t deliberate malfeasance.

“It’s people using the tools correctly to do their jobs faster and be more productive, but putting data they didn’t know was sensitive into tools they didn’t know were insecure.”

Steve Fridakis, chief information security officer for Oracle Health, said some CISOs try to escape the shadow IT problem by blocking access to AI applications from internal networks, but that is “quite a naïve approach, as these controls can easily be bypassed simply by reaching to your phone.”

“I see AI as an opportunity to ease the burden of menial work,” Fridakis said. “So I’m from the school of embracing it and educating my colleagues about its safe use.”

As I thought about the problem, I tried to imagine a plausible scenario. Maybe a bunch of exhausted Big Law associates working on a merger deal at 8 p.m. on a Saturday in New York City. They’ve toiled for weeks, and they’re desperate to get away for dinner with friends. Might they be tempted to do the bad thing—dump a giant document into an AI tool on their personal laptops to work around firm policy and get the heck out of there?

I ran that script by a Wall Street banker and former deals lawyer, but he said that wasn’t very likely. Such people are self-selecting, disciplined overachievers who don’t necessarily see those late hours as being as awful as normies do.

The more likely risk, he and others agreed, is that lawyers working on a deal might simply feed a lot of questions into a public model, inadvertently training it on their case. Maybe opposing counsel could engage in some three-dimensional chess with prompts to the same model designed to tease out their adversary’s strategy.

No one really knows what we’re training these models to do. The risk, Wilson notes, is literally incalculable.

Transparency, openness, and a pro-technology spirit that suggests management is doing its best to adopt new technology can help keep AI users from hiding in the shadows. One short-term solution might be to build sandboxes in which the most AI-curious can experiment under supervision.

Employers need to communicate specifically what the risks are, not just for the company, but also for employees’ own reputations. They also need to think about what should happen to rule breakers.

Wilson said she mainly uses tools like Microsoft’s Copilot and ChatGPT to structure presentations, something that carries little risk beyond making her look dumb if she doesn’t check her work.

“I used confidential client data” is significantly more serious than “I made a presentation with a bogus statistic,” she said. It’s the difference between “I put the company’s reputation at risk” and “I didn’t check my work.”

Real Talk on AI is an occasional column exploring artificial intelligence and the changing workplace. If you have a column idea email djolly@bloombergindustry.com, subject line: real talk.

To contact the reporter on this story: David Jolly in Washington, D.C. at djolly@bloombergindustry.com

To contact the editor responsible for this story: Gregory Henderson at ghenderson@bloombergindustry.com
