Kamales Lardi of Lardi & Partner Consulting looks at the EU’s proposed legislation for AI systems and assesses the potential impact for companies located in or serving the EU market.
The upcoming EU AI Act, set to take effect at the end of this year, will revolutionize the way artificial intelligence technologies are developed, deployed, and applied globally. Initially proposed in April 2021, the AI Act is currently moving through the legislative process and would be the first comprehensive legal framework for AI outside of China.
AI-based solutions have been making headlines in recent months, triggered by the public launch of ChatGPT last November. Since then, the accelerated adoption of AI-based solutions across industries has been astounding. Big tech companies such as Microsoft Corp. and Google owner Alphabet Inc. have solidified their commitment to AI-based products and offerings, potentially pivoting their business models to accommodate the rapid shift. The AI market is expected to grow at an annual rate of 37.3% from 2023 to 2030.
This rapid development has raised concerns about the expanding application of AI across industries and the potential for misuse or harm as a result. Generative AI systems bring added concerns about the widespread dissemination of fake news and harmful speech, as well as the possible exploitation of intellectual property.
As AI is becoming increasingly integrated into our daily lives, the EU’s proposed AI Act seeks to ensure that this technology is used safely and responsibly, while promoting innovation and economic growth in the region.
The Measures
The EU AI Act focuses on providing a regulatory framework for AI systems based on risk categories. Under the act, systems with deemed unacceptable risks will face an outright ban, while high-risk AI systems will be subject to a strict pre-market conformity assessment and ongoing post-market monitoring to ensure their safety and compliance with ethical standards.
The act defines high-risk applications as those that pose a significant risk to the health, safety, and fundamental rights of individuals, such as biometric identification, critical infrastructure management, and predictive policing.
Additionally, the act includes obligations for transparency and human oversight, requiring AI systems to be transparent about their capabilities, limitations, and potential biases.
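To make the tiered structure concrete, the risk-based logic described above can be sketched in code. This is an illustration only, not the act's actual classification: the category names, use-case labels, and obligation lists below are simplified assumptions, whereas the act itself defines them in legal text.

```python
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "unacceptable"  # outright ban
    HIGH = "high"                  # conformity assessment + post-market monitoring
    LIMITED = "limited"            # transparency obligations
    MINIMAL = "minimal"            # no new obligations

# Illustrative mapping of use cases to tiers; the real classification
# is a legal determination, not a lookup table.
USE_CASE_TIERS = {
    "social_scoring": RiskTier.UNACCEPTABLE,
    "biometric_identification": RiskTier.HIGH,
    "critical_infrastructure": RiskTier.HIGH,
    "predictive_policing": RiskTier.HIGH,
    "chatbot": RiskTier.LIMITED,
    "spam_filter": RiskTier.MINIMAL,
}

def obligations(use_case: str) -> list[str]:
    """Return the (illustrative) obligations attached to a use case."""
    tier = USE_CASE_TIERS.get(use_case, RiskTier.MINIMAL)
    if tier is RiskTier.UNACCEPTABLE:
        return ["prohibited"]
    if tier is RiskTier.HIGH:
        return [
            "pre-market conformity assessment",
            "post-market monitoring",
            "human oversight",
        ]
    if tier is RiskTier.LIMITED:
        return ["transparency disclosure"]
    return []
```

The point of the sketch is the shape of the regime: obligations scale with the assessed risk of the use case, not with the underlying technology.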
The Impact
The EU AI Act impacts companies, as well as society in general. Companies that develop and deploy AI systems, as well as companies that use AI, both within and outside of the EU, will have to comply with strict regulations. This includes ensuring the transparency and explainability of algorithms and decision processes, taking measures to prevent bias and discrimination, as well as extensive record-keeping and reporting of incidents of non-compliance.
Although this approach may calm fears and help increase trust with consumers and stakeholders, the flip side will be a significant increase in AI development and implementation costs. Companies potentially will need to go through an extensive application and certification process to demonstrate compliance with the framework. How the measures will be implemented and enforced has not been made clear, and the impact on the EU market remains to be seen.
The EU AI Act, and other proposed regional governance frameworks, focus on the technology, its processes and data, as well as its applications. However, the most innovative technologies in the world are only as good as the people who create them. A critical element to consider would be the governance and accountability of leaders and management teams in these technology companies, who set the tone for acceptable practices in building and deploying revolutionary technologies such as generative AI.
The impact of the AI Act on companies located in or serving the EU market could be significant. Companies will need to ensure their AI-based products and offerings, as well as any application of AI-based solutions in their business environment, are compliant with a range of requirements defined by the act before they are enforced.
High-risk AI systems must be secure, transparent, explainable, fair, and accountable. The systems also must comply with existing laws, such as the General Data Protection Regulation for data privacy and protection, and must not discriminate against individuals or create unfair commercial outcomes.
Establishing robust post-market monitoring and competent, skilled human oversight will be a crucial part of the operational framework. These requirements are expected to have a critical impact on EU-based companies, potentially affecting their competitiveness in the global business landscape and stifling innovation.
Smaller companies and startups in the AI space with limited resources may struggle to comply with these exhaustive requirements, limiting their potential for growth and scale and staying competitive in the global market. This may further the consolidation of the tech industry and concentrate power in the hands of a few companies with the resources to comply with stringent requirements. Additionally, companies will need to reassess their use of data, and how they access data from other sources.
Being Prepared
The EU AI Act may not take effect until 2025 or later, which is when companies will need to be compliant. However, it is important for companies to begin preparations early to reduce potential reputational and financial risks. For example, they should start building an inventory of AI systems that are in use or planned for the business, and conduct a risk assessment based on internal policies and the current regulatory environment, such as the GDPR.
Additionally, companies should develop potential mitigation actions that could be put in place to address the risks, as well as a governance framework that could be established to identify and address them.
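The preparation steps above, building an inventory of AI systems and flagging those that need mitigation plans, can be sketched as a simple record-keeping exercise. This is a minimal sketch under stated assumptions: the record fields and flagging rules are hypothetical simplifications, not criteria drawn from the act.

```python
from dataclasses import dataclass

@dataclass
class AISystemRecord:
    """One entry in a company's inventory of AI systems in use or planned."""
    name: str
    purpose: str
    processes_personal_data: bool  # GDPR exposure (assumed field)
    likely_high_risk: bool         # e.g. biometrics, critical infrastructure

def assess(inventory: list[AISystemRecord]) -> dict[str, list[str]]:
    """Flag systems that warrant mitigation planning under current rules."""
    flags: dict[str, list[str]] = {}
    for system in inventory:
        issues = []
        if system.processes_personal_data:
            issues.append("review GDPR lawful basis and data handling")
        if system.likely_high_risk:
            issues.append("plan for conformity assessment and monitoring")
        if issues:
            flags[system.name] = issues
    return flags
```

In practice the output of such an assessment would feed the mitigation actions and governance framework discussed above, with legal review replacing the boolean shortcuts used here.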
The EU AI Act will be a challenge for most companies to address; however, early preparation will ensure that they will be able to efficiently set up a future-proof environment to develop and utilize AI-based solutions.
This article does not necessarily reflect the opinion of Bloomberg Industry Group, Inc., the publisher of Bloomberg Law and Bloomberg Tax, or its owners.
Author Information
Kamales Lardi is CEO of Lardi & Partner Consulting. She is a strategic thinker in digital and business transformation, and author of “The Human Side Of Digital Business Transformation.’’