The EU Approach to Artificial Intelligence in Tax Administrations

April 19, 2023, 7:00 AM UTC

Tax administrations are increasingly seeking to use artificial intelligence to become more efficient in their work. The European Commission in April 2021 proposed rules and actions aimed at turning Europe into “the global hub for trustworthy AI.” A highly relevant report, “Tax Administration 2022,” by the OECD Forum on Tax Administration also appeared in June 2022.

This article analyzes the application of AI technology in tax authorities, its uses and risks, and the rules proposed by the European Commission. It also offers some thoughts on the importance of respecting taxpayers’ rights and guarantees when implementing AI.

Tax Administrations and AI: Uses and Risks

There are now concrete applications of AI in the different functions of tax authorities—information and assistance, control, collection, and recovery—as well as in customs authorities. As tax authorities become more comfortable with managing large data sets, and as computing power increases, the use of AI and machine learning is opening up new approaches to risk management.

Responsible use of AI goes beyond not engaging in illegal practices through its use. It means using AI in a way that doesn’t violate the rights of minorities, avoids human rights violations, and doesn’t widen the existing inequality gap, either intentionally or accidentally.

There are at least four intrinsic risk types to consider in the planning, programming, and implementation stages for responsible use of AI: fairness and inclusiveness, system reliability and security, user data privacy and security, and transparency and accountability.

The European Approach to Trustworthy AI

For several years, the European Commission has been facilitating and enhancing cooperation on AI across the EU to boost its competitiveness and ensure trust based on EU values.

Following the publication of the European strategy on AI in 2018, the High-Level Expert Group on Artificial Intelligence developed guidelines for trustworthy AI in 2019 and an assessment list for trustworthy AI in 2020. The first coordinated plan on AI had already been published in December 2018 as a joint commitment with member states.

The commission’s white paper on AI, published in 2020, set out a clear vision for AI in Europe: an ecosystem of excellence and trust, setting the scene for the current proposal.

The proposed new rules and actions published in April 2021 will be applied directly in the same way across all member states based on a future-proof definition of AI. They follow a risk-based approach:

Unacceptable risk. AI systems considered a clear threat to the safety, livelihoods, and rights of people will be banned. This includes AI systems or applications that manipulate human behavior to circumvent users’ free will; for example, toys using voice assistance that encourage dangerous behavior by minors, and systems that allow “social scoring” by governments.

High-risk. AI systems identified as high-risk include AI technology used in:

  • Critical infrastructures (such as transport),
  • Educational or vocational training (such as scoring of exams),
  • Safety components of products (such as AI application in robot-assisted surgery),
  • Employment (such as CV-sorting software for recruitment procedures), and
  • Essential private and public services (such as credit scoring denying citizens the opportunity to obtain a loan).

High-risk AI systems will be subject to strict obligations before they can be put on the market.

Limited risk. AI systems with specific transparency obligations: When using AI systems such as chatbots, users should be made aware that they are interacting with a machine so they can take an informed decision to continue or step back.

Minimal risk. The legal proposal allows the free use of applications such as AI-enabled video games or spam filters. The vast majority of AI systems fall into this category. The draft rules don’t intervene here, as these AI systems represent only minimal or no risk for citizens’ rights or safety.
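
Taken together, the four tiers amount to a mapping from type of application to regulatory consequence. Purely as an illustration, and not as anything defined in the proposal itself, the following minimal Python sketch shows how a tax authority might tag its own AI use cases against those tiers; the application names and the obligations_for helper are hypothetical.

    from enum import Enum

    class RiskTier(Enum):
        """Risk tiers described in the European Commission's proposed AI rules."""
        UNACCEPTABLE = "banned outright"
        HIGH = "strict obligations before being put on the market"
        LIMITED = "specific transparency obligations"
        MINIMAL = "free use, no additional obligations"

    # Hypothetical inventory of AI use cases mapped to tiers, following the
    # examples described above; the entries are illustrative only.
    EXAMPLE_CLASSIFICATION = {
        "social scoring by governments": RiskTier.UNACCEPTABLE,
        "CV-sorting software for recruitment": RiskTier.HIGH,
        "credit scoring for loan decisions": RiskTier.HIGH,
        "taxpayer-assistance chatbot": RiskTier.LIMITED,
        "spam filter": RiskTier.MINIMAL,
    }

    def obligations_for(application: str) -> str:
        """Return the regulatory consequence for a (hypothetical) application."""
        tier = EXAMPLE_CLASSIFICATION.get(application, RiskTier.MINIMAL)
        return f"{application}: {tier.name} risk -> {tier.value}"

    if __name__ == "__main__":
        for app in EXAMPLE_CLASSIFICATION:
            print(obligations_for(app))

In practice, of course, the proposal defines these categories in legal rather than technical terms, and classifying a given system would require case-by-case legal analysis rather than a lookup table.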

Final Thoughts

AI presents a scenario of disruptive change for tax authorities over the medium to long term. Digital transformation involves not only structural but also cultural changes. Authorities need to understand how technology impacts their functions and develop the skills to use it efficiently.

As we have seen, AI also involves risks that require specific regulation, especially to adequately protect the rights of taxpayers.

Laws in general aren’t yet adapted to the consequences of using AI in relations between tax authorities and taxpayers, or in tax application procedures. For this reason, the courts are called upon to play a fundamental role in the effective control of the use of algorithms, while on occasion having to interpret when, and in what way, existing legal principles are applicable in this new context, even with appropriate modifications.

It is vital that the fundamental rights of citizens are always respected in the design, development, application, and audit of algorithms, thus avoiding any bias or discrimination that their use may produce.

It is clear that AI doesn’t work by itself but depends on how it is trained or programmed by humans, which is why humans are, and will remain, responsible for its proper functioning.

Rules such as those in the European Commission’s approach to trustworthy AI, discussed above, are needed. It will be important to analyze the specific application of AI in light of the rules established by the commission, or those that exist or are issued in individual countries. Governments must collaborate with the different actors involved to ensure that AI is used ethically and equitably, always seeking to ensure that information technology is integrated with a tax authority’s human resources.

AI should augment or complement human capabilities so that people can add value to their tasks while improving the quality and efficiency of the public function for citizens.

This article does not necessarily reflect the opinion of Bloomberg Industry Group, Inc., the publisher of Bloomberg Law and Bloomberg Tax, or its owners.

Author Information

Alfredo Collosa is a consultant and tutor in tax administration at the Inter-American Center of Tax Administrations, as well as a professor, researcher, lecturer, and author of books and publications. He holds an official master’s degree in Public Finance and Tax Administration (UNED-IEF).

