The Trump administration’s framework to help the US “win” the artificial intelligence race is welcome direction from the federal government to foster innovation while developing standards for AI’s many applications.
To meet the goal of ensuring “America sets the technological gold standard worldwide,” Congress and regulators must follow through, especially when it comes to taxes.
I’ve seen firsthand how quickly technology can outpace regulation, from my time representing Pennsylvania’s Sixth District in Congress to my work today advising companies that are trying to navigate complex public policy landscapes.
AI-powered tools that are marketed to taxpayers and businesses promise convenience, speed, and savings. They also carry serious risks. There are virtually no federal guardrails to ensure accuracy, protect privacy, or hold companies accountable for errors. That must change.
As a former legislator, I understand how difficult it can be to craft effective regulations for emerging technologies. I also know that waiting too long leads to confusion and harm. We can’t afford to let the tax code become a playground for untested algorithms and unregulated actors.
Take the research and development tax credit. It has been instrumental in supporting manufacturers and tech firms, but receiving it requires more than checking a box. Determining eligibility requires professional judgment, nuanced interpretation of IRS rules, and often in-depth interviews to verify qualified activities. AI can assist in this process, but it can’t replace it.
Yet we’re seeing new firms spring up overnight, built entirely on AI systems that attempt to file claims and secure credit without transparency. Companies such as SPRX, MainStreet, and Neo.Tax have grown using automated models to target R&D filings—in many cases charging fees based on the filings they generate.
It’s a business model that could lead to fraudulent filings. This puts US companies at serious legal and financial risk.
I’ve long supported innovation and the responsible use of technology. But the absence of regulatory oversight here is alarming. Taxpayers deserve to know what tools are being used to handle their filings. CPAs deserve clarity on best practices. And companies deserve protection from misleading or unethical providers.
Congress and US regulators must step in and bring much-needed clarity in several areas.
First, they should establish clear, unambiguous standards for when and how AI can be used in tax preparation. These standards will require input from numerous stakeholders in both the government and private sectors.
To start, the IRS should hold a roundtable with private sector representatives so stakeholders can explain how they’re using AI, where they believe AI has limitations, and how they protect their clients from overreliance. Their input would provide real-world experience and help federal regulators assess where their focus is most needed.
Standards for AI in tax preparation would likely involve not just the IRS, but also federal agencies responsible for consumer protection, financial oversight, fraud prevention, and technology standards, all working together to set guardrails.
This type of coordination would ensure all relevant agencies have a voice in establishing standards. Ideally, these agencies would keep adaptability top of mind to keep pace with the quick-changing nature of AI.
One particularly important standard is mandatory disclosure when AI tools are involved in calculating or submitting returns. It is imperative that both the IRS and taxpayers know if and when AI was used at any point in the filing process. For example, if an error is found in a filing, regulators and the filer should know where and how it originated.
Second, there must be a framework for certifying or auditing firms that rely on AI. Countless established and emerging companies exist in the space, and lack of a process to vet and certify them creates a Wild West scenario and the potential for bad or incompetent actors to harm taxpayers and businesses.
This requires creating meaningful accountability mechanisms to protect taxpayers and businesses if those tools get it wrong. The goal isn’t to hold back innovation; it’s to ensure it operates within a system that prioritizes accuracy, ethics, and the public good.
We’ve done this before. In the financial services and healthcare sectors, we’ve put rules in place to ensure emerging technologies operate responsibly. There’s no reason we can’t do the same here.
The IRS is beginning to explore AI, and that’s a step in the right direction, but the agency can’t do it alone. Lawmakers must step up and define the rules of the road before it’s too late.
Taxes don’t leave room for guesswork. The stakes are simply too high. If we fail to act, we risk undermining the integrity of our tax system—and the trust that underpins it.
AI has tremendous potential, but it must be guided by clear policy, professional judgment, and commonsense safeguards. It’s time for Congress to get to work.
This article does not necessarily reflect the opinion of Bloomberg Industry Group, Inc., the publisher of Bloomberg Law, Bloomberg Tax, and Bloomberg Government, or its owners.
Author Information
Ryan Costello represented Pennsylvania’s Sixth Congressional District in the US House of Representatives and currently leads the consulting firm Ryan Costello Strategies.