Colorado Lawmakers Pass Landmark AI Discrimination Bill – and Employers Across the Country Should Take Notice
5.10.24
Colorado is close to becoming the first state to enact a law prohibiting employers from using artificial intelligence to discriminate against their workers – and requiring companies to take extensive measures to avoid such algorithmic discrimination. The landmark AI bill, which passed the state legislature on May 8, would also impose broad rules on developers of high-risk artificial intelligence systems and the businesses that use them. If enacted, the law would fully take effect in 2026 and target discrimination arising from AI used to make important decisions – such as hirings and firings. We’ll cover the key points so Colorado employers can prepare and businesses across the country can understand what is likely to eventually head your way.
[Editor’s Note: Gov. Jared Polis signed SB 205 into law on May 17.]
What’s This About?
The Colorado legislature passed SB 205 on May 8. If signed by Gov. Jared Polis, the bill would apply to certain companies that develop or use “high-risk artificial intelligence systems” – meaning AI systems that make or are a substantial factor in making “consequential decisions” impacting:
- education enrollment or an education opportunity;
- employment or an employment opportunity;
- a financial or lending service;
- an essential government service;
- healthcare services;
- housing;
- insurance; or
- a legal service.
One of the main purposes of the bill is to prevent “algorithmic discrimination” – which occurs when the use of AI systems results in unlawful differential treatment or impact disfavoring certain individuals or groups based on protected classifications (for example, age, disability, race, religion, or sex).
If this sounds familiar, it’s because the bill closely models the EU’s AI Act, which also classifies AI use in employment settings as “high risk.”
What Would the Bill Require and Who Would It Apply To?
If enacted, “developers” and “deployers” of high-risk AI systems would have a legal duty, beginning on February 1, 2026, to use reasonable care to avoid algorithmic discrimination in the AI system.
- Developer means any person doing business in Colorado that develops or intentionally and substantially modifies an AI system.
- Deployer means any person doing business in Colorado that uses a high-risk AI system.
The attorney general would have exclusive authority to enforce the new rules. In an enforcement action, there would be a rebuttable presumption that a developer or deployer used reasonable care if it complied with the rules applicable to it under the new bill and any other rules established by the attorney general.
Rules Applicable to Developers
The rules require developers to, among other things:
- make extensive information available to deployers and other developers, such as the known harmful or inappropriate uses of its high-risk AI system and summaries of the type of data used to train it;
- provide a public statement, such as on their website, summarizing the types of high-risk AI systems they have developed or substantially modified and how they manage known or reasonably foreseeable risks of algorithmic discrimination; and
- timely disclose to the attorney general all known or reasonably foreseeable risks of algorithmic discrimination arising from the intended uses of the high-risk AI system.
Rules Applicable to Deployers (Businesses Using High-Risk AI Systems)
The rules require deployers – subject to the exceptions discussed below – to, among other things:
- implement a risk management policy and program that governs their use of a high-risk AI system and meets detailed specifications laid out in the new rules;
- regularly and systematically review that risk management policy and program;
- complete impact assessments for the high-risk AI system (or contract with a third party to do so) at least annually and within 90 days after any major modification to the system is made available;
- notify consumers when a high-risk AI system will be used to make a consequential decision and provide related disclosures; and
- make a statement on their website summarizing, among other things, the AI systems they use and how they manage the risks of algorithmic discrimination arising from that use.
Exceptions for Certain Deployers
A deployer is exempt from these requirements (other than the consumer notice and disclosure requirements) if, at all times while it uses the high-risk AI system:
- the deployer employs fewer than 50 full-time equivalent employees;
- the deployer does not use its own data to train the AI system;
- the AI system is used for the intended uses as disclosed by the developer and continues learning based on data derived from sources other than the deployer’s own data; and
- the deployer makes certain information related to impact assessments available to consumers.
Consumer Disclosures Required When Any AI System Is Used
Under the new bill, any deployer or other developer that uses, sells, or in any way makes available an AI system that’s intended to interact with consumers – regardless of whether the system is “high-risk” or whether the system is used to make a consequential decision – must disclose to each consumer who interacts with the system that they are interacting with an AI system. An exception applies when it would be obvious to a reasonable person that they’re interacting with an AI system.
No Restrictions on Certain Activities
The rules clarify that they do not restrict a developer’s or deployer’s ability to engage in certain activities, such as complying with other legal obligations, taking immediate steps to protect a consumer’s life or physical safety, or engaging in specified research activities.
What’s Next?
If SB 205 is enacted, your business in Colorado could become subject to these sweeping new rules. Whether you’re a developer of high-risk AI systems or an employer that uses them, you will need to step up and pay attention. The new rules demonstrate the importance of adopting an AI risk-management policy and program. Our AI Governance team can help you craft a policy, prepare for impact assessments, and navigate this landmark legislation.
If you are located outside of Colorado, don’t expect this type of law to pass you by for long. Both California and New York are actively debating similar laws that would prohibit certain types of AI uses in the workplace and require a high level of transparency – and would require you to either accommodate a worker who doesn’t want to be the subject of AI use or avoid using it for them altogether. Seeing another state beat them to the punch will likely light a fire under lawmakers in those – and other – states to catch up. We’re sure to see more activity across the country in the coming weeks and months.
Conclusion
We will continue to monitor developments as they unfold. Make sure you subscribe to Fisher Phillips’ Insight System to gather the most up-to-date information on AI and the workplace. Should you have any questions on the implications of these developments and how they may impact your operations, please do not hesitate to contact your Fisher Phillips attorney, the authors of this Insight, any attorney in our Denver office, or any attorney in our Artificial Intelligence Practice Group.
Related People
- Vance O. Knapp, Counsel
- Erica G. Wilson, Associate