With AI rapidly capturing lawyers’ attention, Bloomberg Law’s legal analysts are assessing the real-world impact that AI technology and tools are having on lawyers and the law. This analysis piece is one of five featured in a new report, Artificial Intelligence: The Impact on the Legal Industry, currently available to subscribers on the In Focus: Artificial Intelligence page and soon to be released to the public.
First privacy, now artificial intelligence. In the absence of comprehensive federal legislation on AI, US states are taking the lead, enacting laws that echo the European Union’s efforts to provide protections to consumers and other individuals in the face of a rapidly growing technology.
If Colorado’s AI law takes effect as planned, it could be the first ripple in a wave of US jurisdictions passing comprehensive AI statutes of their own. The pattern isn’t new to the states: It mimics the one set by California’s landmark consumer privacy act.
The Ripple Effect
California enacted the California Consumer Privacy Act (CCPA) in 2018, and the law went into effect in 2020. Then in 2023, the California Privacy Rights Act (CPRA) went into effect, amending the CCPA. The first law of its kind in the United States, enacted on the heels of the EU’s General Data Protection Regulation (GDPR) coming into force in May 2018, the CCPA (and, later, the CPRA) secures privacy rights for California consumers while also guiding businesses on data retention policies.
The CCPA began a ripple effect in the US, both as the first comprehensive privacy law in any US jurisdiction and as a model built on the GDPR’s blueprint. Since the CCPA went into effect, 20 US states have enacted comprehensive consumer data privacy laws. Colorado followed closely in California’s footsteps, becoming the third state (after California and Virginia) to do so and continuing the wave.
In the same vein, Colorado in 2024 became the first US jurisdiction to enact comprehensive regulation of artificial intelligence: the Colorado Artificial Intelligence Act (CAIA).
The CAIA’s Blueprint, Inspired by the EU AI Act
Set to take effect on Feb. 1, 2026, the Colorado Artificial Intelligence Act is the first state law specifically regulating the use of “high-risk” AI systems. Under the CAIA, companies that develop or deploy high-risk AI must adhere to defined requirements.
A high-risk AI system is defined as one used to make, or that plays a substantial role in making, a decision that materially impacts a person’s life. Key areas of impact include education, employment, financial services, health care, housing, legal or criminal justice outcomes, and public benefits or social services.
The CAIA requires developers and deployers of high-risk AI systems to implement risk management frameworks, conduct impact assessments, and provide clear disclosures about AI usage.
Transparency and fairness are central principles of the CAIA. It mandates that consumers be notified when they are subject to AI-driven decisions affecting them. Moreover, the law provides for human review or other avenues of recourse concerning AI-generated decisions. Enforcement of the CAIA falls under the authority of Colorado’s Attorney General.
Inspired by the EU AI Act, the CAIA seeks alignment with emerging global AI governance principles, and the EU law serves as its most significant reference point. Like the EU model, the CAIA distinguishes clearly between low-risk and high-risk AI systems. Under the EU AI Act, high-risk AI systems are those that pose a significant risk of harm to people’s health, safety, or fundamental rights. The EU AI Act also divides AI applications into four categories (unacceptable risk, high risk, limited risk, and minimal risk), each with separate regulatory measures.
While the CAIA emphasizes algorithmic accountability, human oversight, and transparency and fairness, reflecting core aspects of the EU regulation, it specifically focuses its framework on high-risk AI systems that make consequential decisions—those that have a significant legal or similar impact on consumers.
The EU AI Act goes further in certain respects. For example, it explicitly bans real-time biometric surveillance in public spaces and requires pre-market conformity assessments for some AI systems—provisions that the CAIA doesn’t currently include.
Narrower in scope than the EU AI Act, with its broader focus on societal and fundamental rights impacts, the CAIA primarily aims to prevent discriminatory harms to consumers. Differences notwithstanding, Colorado’s law represents a significant milestone in aligning domestic AI governance with international regulatory trends.
Peeking Into US AI Governance’s Future
Colorado’s pioneering enactment firmly positions the state as a national leader in AI and data governance, much as the CCPA did for California in privacy law. And while Colorado’s is the only comprehensive state AI regulation to date, a host of US jurisdictions have enacted narrower, AI-specific laws in recent years. This emerging mosaic of state laws echoes the pattern of privacy regulation sparked by California’s early leadership.
The CAIA’s journey from introduction to enactment to effect has not been smooth sailing. On Aug. 6, Colorado’s governor called for a special session set for later this month, in part to address concerns raised by his office and tech companies about the law’s scope, signaling potential revisions.
Should the law remain intact after the special session, Colorado’s comprehensive, risk-based approach to AI governance may prove to be the catalyst that inspires a broader state-level movement—which, in turn, could add momentum for more cohesive and coordinated federal AI legislation. The dynamic interplay between state innovation and federal oversight is becoming increasingly critical as AI technologies move at a breakneck pace.
With AI capabilities ever-evolving and public scrutiny intensifying, the central question of AI governance is shifting accordingly: It is no longer a matter of whether comprehensive AI regulation will spread, but rather how consistently and coherently AI governance will be applied across the US.
The coming months and years will be pivotal in shaping whether the US develops a patchwork of divergent state rules or forges a unified regulatory framework that balances innovation, risk mitigation, and public trust.
The new report, Artificial Intelligence: The Impact on the Legal Industry, is available to subscribers here. Non-subscribers can click here to download the report.
In previous articles in this series: Bloomberg Law Legal Analyst Eleanor Tyler’s July 29 article examined AI slop in litigation and possible remedies. An Aug. 13 piece by Bloomberg Law Legal Analysts Janet Chanchal and Linda Masina looked at AI washing and how the legal profession can curb it. Chanchal’s Aug. 14 piece looked at how lawyers are using AI, and Masina’s Aug. 15 article focused on law firm operations and AI.
Bloomberg Law subscribers can find related content on our In Focus: Artificial Intelligence resource and our AI Legal Issues Toolkit.