Dentons’ Peter Stockburger urges US businesses to keep a close eye on the EU AI Act’s requirements for general-purpose AI use.
The European Parliament, the Council of the EU, and the European Commission reached consensus Dec. 8 on the final contours of what would be the world's first comprehensive regulation of artificial intelligence—the EU AI Act.
Although the final text hasn’t been released, it appears the regulation will impact a wide variety of organizations, including those based in the US with little or no presence in the EU.
Coupled with existing EU regulations such as the General Data Protection Regulation (GDPR), the AI Act (AIA) is poised to have a significant impact on US businesses across sectors and industries.
Risks
A risk-based approach is one of the AIA's key features. Under the proposed framework, certain AI systems, such as those that threaten fundamental rights, will be banned outright within the EU, while others will be categorized as presenting either limited or high risk.
Systems identified as presenting limited risk would be subject to certain transparency obligations, while those deemed high risk would face stricter obligations, such as performing risk assessments, adopting certain governance structures, and maintaining a certain level of cybersecurity.
Examples of high-risk systems include certain medical devices, recruitment and HR tools, and systems that manage critical infrastructure such as water, gas, and electricity. The proposed framework also addresses the use of general-purpose AI and foundation models.
For high-impact general-purpose AI, organizations will face additional obligations, such as conducting model evaluations and systemic risk assessments, performing adversarial testing, reporting incidents, maintaining cybersecurity, and reporting on energy efficiency.
There are several open questions and takeaways for US businesses. Even if the AIA becomes binding law, it will be some time before it's fully enforced.
The final text of the AIA likely will be published in the Official Journal of the EU in early 2024. The regulation will probably become effective two years after its entry into force, although some provisions may take effect sooner. This means the law may not be fully implemented until 2025 or 2026—an eternity in the world of AI.
AI Evolution
As AI continues to develop at a dizzying pace, the AIA framework could become stale and fail to keep up with the technology's trajectory.
We also don't know the full extent of the AIA's extraterritorial reach over US businesses that don't have operations in the EU. Initial drafts indicate the law will apply to providers that place AI systems on the market or put them into service within the EU, regardless of whether those providers are established in the EU.
Initial drafts also indicate the AIA will apply to providers and users of AI systems outside the EU where the output produced by the system is used in the EU.
This latter standard could affect a wide range of businesses that deliver AI outputs to consumers located in the EU—arguably a broader standard than even that of the GDPR.
A provider has been defined in prior drafts as any person or entity that develops an AI system or has an AI system developed with a view to placing it on the market or putting it into service under its own name or trademark, whether for payment or free of charge.
Requirements
US businesses, many of which use general-purpose AI to build products and services, must also keep a close eye on the AIA's requirements in this area. The AIA likely will require US businesses to carefully manage risk concerning general-purpose AI, including providing, in some cases, technical documentation and detailed summaries of the content used to train such systems.
This may be difficult for many organizations, as there's little visibility into how current general-purpose AI is trained or developed. The AIA may further require that larger general-purpose AI models posing systemic risk undergo additional testing, including model evaluations and adversarial testing.
The AIA may ultimately determine systemic risk based partly on model size, measured by the computing power used in training and expressed in floating point operations. This may mean that models larger than GPT-3.5, the model behind ChatGPT, could be viewed as posing systemic risk, potentially affecting businesses that rely on such technology as the backbone of their products and services.
Although the final scope and impact of the AIA on US businesses remain an open question, US businesses can prepare now by developing, implementing, and maintaining an AI governance framework.
Doing so will help ensure responsible development and deployment of AI across the enterprise while minimizing risk. It also will help future-proof organizations against potential compliance issues.
AI governance, while it varies across businesses, generally includes creating an AI registry of current and potential future use cases; establishing a cross-functional AI governance committee; adopting robust policies, risk tolerances, standard operating procedures, and transparency mechanisms; and promoting a culture of responsible AI use.
Organizations that get this right can capture market share and address the growing needs of customers, partners, and regulators, who increasingly expect organizations to develop and deploy AI only in a responsible, safe, and ethical manner.
This article does not necessarily reflect the opinion of Bloomberg Industry Group, Inc., the publisher of Bloomberg Law and Bloomberg Tax, or its owners.
Author Information
Peter Stockburger is office managing partner of Dentons' San Diego office, a member of the venture technology and emerging growth companies group, and co-lead of the autonomous vehicle practice.