AI Risk, Return High Among Corporate Board Priorities

Jan. 5, 2026, 4:57 PM UTC; Updated: Jan. 6, 2026, 8:42 PM UTC

As artificial intelligence has moved from a novel and uncertain technology to part of everyday business, corporate boards are stepping in to oversee AI’s rollout and making sure companies are getting their money’s worth.

“It’s now part of a conversation, most likely in every boardroom, in some way, shape or form,” said Brian Kushner, a senior managing director at FTI Consulting who sits on four boards.

Boards are trying to give strategic guidance on adopting a fast-moving technology that carries serious risks. They’re starting to ask how companies are tracking return on investment, and whether they’re getting enough out of what they’re paying for. Compounding the challenge, the board itself often lacks subject matter expertise.

The stakes are high, with hundreds of billions pouring into corporate AI investments while risks for organizations using the tools—including hallucinations, bias, and cyber attacks—grow.

“Beyond simply mitigating risks, boards are increasingly expected to ensure that AI initiatives deliver measurable business value and challenging management not only on ‘how’ AI will be implemented, but also ‘why’ these efforts make sense in the broader context of value creation,” said Beena Ammanath, executive director of Global Deloitte AI Institute.

Adopting AI

AI often ranks at or near the top in surveys of corporate directors’ priorities. In a December survey from the National Association of Corporate Directors, nearly half of directors named AI among the top five issues that would have an impact on their organization in 2026.

Companies adopting AI initially must decide which tools—and what kinds of artificial intelligence—they need.

For example, many are looking for guidance on whether they should use machine learning, which uses large amounts of data to seek patterns for making decisions, or if they need generative AI, which can create new content based on inputs.

Generative AI uses should be taken on more carefully because they carry greater risks, said Rama Ramakrishnan, a professor of the practice of AI and machine learning at MIT Sloan School of Management, who researches how businesses use AI.

Return on Investment

Boards will eventually hold companies accountable for measuring ROI on the big AI investments they’re making—but for now, “boards are not yet fully engaged in the ROI discussion,” said John Rodi, co-leader of KPMG’s Board Leadership Center.

That will likely change quickly as companies go beyond looking at AI as a path to greater efficiency, productivity, and cost savings, to a “force amplifier effect, where companies are using it to expand their market share, to change how services are delivered, to increase revenue,” he said.

Among C-suite and board members who responded to a Deloitte survey published a year ago, 70% projected it would take at least 12 months to “resolve the challenges related to surpassing or achieving their expected ROI from GenAI.”

“Given the long runway, rather than focusing solely on financial outcomes, organizations can examine interim metrics tied to effective integration,” Ammanath said. She pointed to adoption rates, productivity gains, and risk mitigation.

Risk and Governance

The AI risks worrying boards going into 2026 include data security, regulatory compliance, workforce retraining, and employees using AI tools that haven’t been approved by management, Ammanath said.

“Boards that proactively engage by setting clear oversight frameworks, defining accountability, and ensuring management has a plan for governance will be better positioned to capture AI’s benefits while mitigating its risks,” she said.

A survey by Diligent published in September found that only 22% of public company directors polled said their board had adopted formal policies around AI governance or oversight, although another 60% said they had either discussed it or planned to address it on an upcoming agenda.

Regulation isn’t generally the top focus for corporate AI governance programs, but it’s an important area, said Guru Sethupathy, head of the AI governance business at the platform AuditBoard.

Parts of the EU’s sweeping AI Act are already in effect. A number of US states, including Colorado and California, had begun passing AI regulations targeting issues like bias, but President Donald Trump in December issued an executive order aimed at freezing state-level regulation of the technology.

Sethupathy said he expects the growing patchwork of AI regulations to divide not by state, but by use case. For example, Colorado is regulating AI in insurance, and New York City has an AI hiring law.

Another growing area of concern: AI-driven cyber attacks.

“It’s a matter of when, not if, an attack happens,” Rodi said. “What is an organization’s plan when a cyber attack occurs?”

Ramakrishnan, the MIT business professor, said companies can go awry by being either too reckless or too careful.

“The first mistake is being overly aggressive and essentially deploying the LLM for a task without enough scrutiny,” he said.

On the other extreme, Ramakrishnan said, are people “who are so apprehensive about the hallucinations and the error rates, they literally are paralyzed. They don’t want to do anything.”

Some companies get stuck when their processes for approving AI use cases are too slow, said Alissa Lugo, senior director and analyst at Gartner’s legal and compliance group.

“They kind of got the beginning stage set up, but the execution of it, where they can push the envelope more in terms of how they’re using AI, is getting bogged down,” Lugo said.

Board Expertise

An overarching challenge for boards trying to stay on top of the rapidly changing AI landscape is that they often lack members who are AI experts themselves.

Figuring out how to get AI questions to the right committee is a big challenge for boards, Kushner said.

“The knowledge gap is significant,” Ammanath said.

Some boards try to recruit AI specialists to fill seats, but those people can be hard to find.

An alternative: “Upskilling” the existing board with AI literacy education, bringing in external experts, and holding workshops, Ammanath said.

To contact the reporters on this story: Isabel Gottlieb in Washington at igottlieb@bloombergindustry.com; Kaustuv Basu in Washington at kbasu@bloombergindustry.com

To contact the editors responsible for this story: Catalina Camia at ccamia@bloombergindustry.com; Jeff Harrington at jharrington@bloombergindustry.com
