Trump’s Fight Over AI Rules Requires a Digital Librarian: Werfel

December 3, 2025, 9:30 AM UTC

Today’s most important idea for artificial intelligence may be one that won’t make anyone any money. To understand why, we need to remember a time before algorithms.

When I was a kid, the most powerful search engine I knew lived on a wooden shelf in our basement: the brown and gold volumes of the World Book encyclopedia, built on an editorial process anchored in objectivity. But today’s students don’t head to a bookshelf.

They open a browser or an app and type a question into an AI model. Instead of a vetted encyclopedia, they encounter an array of corporate-owned algorithms—each trained on different data, shaped by different incentives, and capable of giving different answers. In an era when a single question can yield a dozen conflicting algorithmic “truths,” the need for transparency has never been greater.

This challenge is becoming more urgent. A draft executive order now circulating in Washington suggests the federal government may soon weigh in on how standards for AI transparency and bias are set.

With potential new requirements on the horizon, we are entering a critical debate about who shapes the information Americans receive and whether a clear path to neutrality will exist.

Algorithm Owner’s Answers

There’s a striking example of why this debate matters. The AI answer-engine on Truth Social reportedly produced responses that contradicted the platform owner’s public statements.

The platform previously stated that it would review user feedback and refine the beta tool, though it hasn’t indicated whether those refinements would address answers that diverge from the Trump administration’s positions.

The episode was a headline curiosity, but it was also a reminder that algorithms aren’t inherently neutral. They can be steered, tuned, aligned, or realigned depending on who owns the bookshelf.

The encyclopedia in my basement didn’t do that.

Fractured Information Landscape

As educators consider how large language models will enter the classroom, much as calculators eventually entered standardized math testing, they face a far more complicated reality. Unlike calculators, AI models don’t produce uniform answers to common questions. They draw from different sources, reflect different perspectives, and are guided by different underlying values.

When I opened the World Book entry on the Boston Tea Party, I trusted I was reading something rooted in evidence, not algorithmic preference. Those volumes weren’t perfect, but they reflected a process: editors who checked claims, scholars who debated interpretations, and a commitment, however human and fallible, to neutrality and verification.

That process created a baseline of trust that today’s students can no longer assume when an answer comes from a corporate-owned algorithm with widely varying guardrails and few of the editorial procedures that shaped the encyclopedias of my childhood, rather than from a transparent, peer-reviewed editorial framework.

Public Institutions’ Challenge

Now grown up, and having recently served as IRS commissioner, I see these challenges from a vastly different vantage point than the kid in the basement holding a World Book volume did.

In conversations with other tax administration leaders around the world, we explored whether to add large language models to our public-facing government websites to help taxpayers get clearer, more conversational answers to their queries. The promise was obvious—simpler explanations, faster responses, and fewer barriers for people trying to navigate complex rules.

But the risks are just as clear. If we don’t train these models with extraordinary care, or if we can’t validate that they’ll remain stubbornly neutral, we risk creating tools that drift into expansive or inaccurate tax advice.

That could mislead taxpayers, undermine fairness, and erode trust in the system we are responsible for protecting. By the time I stepped down as commissioner, the consensus was clear. The idea held real potential, but we weren’t ready for it yet.

Without the equivalent of those old editorial guardrails—oversight, verification, and clear boundaries—the outcomes could be as messy as the tax code itself. For public institutions, neutrality and explainability aren’t optional; they’re mandatory.

An Algorithmic Librarian

In a moment defined by AI breakthroughs and billion-dollar valuations, I’m proposing a digital successor to the encyclopedia on my childhood shelf. We need an independent, nonprofit, peer-reviewed AI model. Its purpose should be clarity, transparency, and pluralism, not persuasion or profit.

Such a model should cite credible sources, reflect diverse viewpoints, explain its reasoning, undergo independent auditing, and be insulated from ideological or commercial pressures. Think of it as a “public good” model of AI that educators, students, and public institutions can trust.

The metaphor is simple. In my basement, the volumes didn’t change their answers depending on who opened them. Their credibility came from process, peer review, and transparency.

Today, we need a digital equivalent: a trusted librarian in an age of competing algorithms. Rather than a gatekeeper, we should have a guide that helps learners and citizens navigate a world in which answers increasingly depend on which model you ask.

If we act, we can build something that stands apart from the commercial AI race: a neutral, transparent, evidence-based source of knowledge worthy of the trust once placed in those heavy volumes in a quiet New York basement.

This article does not necessarily reflect the opinion of Bloomberg Industry Group, Inc., the publisher of Bloomberg Law, Bloomberg Tax, and Bloomberg Government, or its owners.

Author Information

Danny Werfel was IRS commissioner from 2023 to 2025 and is now executive in residence at the Johns Hopkins School of Government and Policy and a distinguished fellow at the Polis Center for Politics at Duke University.

To contact the editors responsible for this story: Rebecca Baker at rbaker@bloombergindustry.com; Melanie Cohen at mcohen@bloombergindustry.com