Recently Enacted AI Law in Colorado: Yet Another Reason to Implement an AI Governance Program

by Avi Gesser, Erez Liebermann, Matt Kelly, Martha Hirst, Andreas Constantine Pavlou, Cameron Sharp, and Annabella M. Waszkiewicz

Top left to right: Avi Gesser, Erez Liebermann, Matt Kelly, and Martha Hirst. Bottom left to right: Andreas Constantine Pavlou, Cameron Sharp, and Annabella M. Waszkiewicz. (Photos courtesy of Debevoise & Plimpton LLP)

On May 17, 2024, Colorado passed Senate Bill 24-205 (“the Colorado AI Law” or “the Law”), a broad law regulating so-called high-risk AI systems that will become effective on February 1, 2026.  The Law imposes sweeping obligations on both AI system deployers and developers doing business in Colorado, including a duty of reasonable care to protect Colorado residents from any known or reasonably foreseeable risks of algorithmic discrimination.

The authors have previously written about the importance of implementing AI governance programs to: (i) better identify, test, and adopt low-risk/high-value AI use cases; (ii) avoid spending significant amounts of time and resources on high-risk/low-value AI use cases; and (iii) reduce regulatory and reputational risks associated with the adoption of AI.  Recent AI enforcement actions by the SEC, the imminent final passage of the EU AI Act, guidance from the NSA and FBI on cybersecurity risks associated with AI adoption, the Treasury’s recent Report on AI, as well as recent U.S. state law developments all reinforce the incentives for companies to adopt AI governance programs.

This blog post provides a high-level overview of the Law passed by Colorado, explores applicable governance requirements and use cases, and offers several tips for companies looking to implement or improve their AI governance programs.

A. Key Takeaways from the Colorado AI Law

  • Scope the applicability. Whether acting as a developer or a deployer, understand which systems are in scope as high-risk AI systems.
  • Create a roadmap for compliance. Although the implementation period is long—with over 18 months to go—ensuring compliance (especially with the transparency obligations) will take a significant amount of time and resources for many companies.
  • Modify contracts. It will take time to modify contractual arrangements between deployers and developers to ensure that all of the required information is provided.
  • Choose a risk management framework. As discussed below, the Law requires the implementation of a risk management framework, as well as detailed impact assessments. It also provides a safe harbor if that risk management framework complies with certain recognized standards, such as the NIST AI RMF, so companies should consider whether they want to try to meet those standards, which will take substantial time and resources.

B. Overview of the Colorado AI Law

Defining “High-Risk” AI Systems

An AI system is considered “high-risk” if, when deployed, it “makes, or is a substantial factor in making, a consequential decision.”  A consequential decision is one that has a “material legal or similarly significant effect” relating to the “provision or denial” to any consumer of (i) educational enrollment; (ii) an employment opportunity; (iii) financial or lending services; (iv) an essential government service; (v) healthcare-related services; (vi) housing; (vii) insurance; or (viii) legal services.

The Law includes a list of technologies that are expressly excluded from the definition of high-risk AI systems, including, e.g., calculators, databases, and spreadsheets.  However, these technologies are not excluded to the extent they “make, or are a substantial factor in making, a consequential decision.”  Because the definition of a high-risk system already requires that the technology “makes, or is a substantial factor in making, a consequential decision,” the list of excepted technologies does not, as a practical matter, appear to change the scope of high-risk AI systems under the Law.

Exemptions and Safe Harbors

The Law provides carve-outs for various regulated entities.  For example, insurers are considered to be in compliance with the Law if they are subject to Colorado’s AI insurance statutes and the regulations promulgated thereunder.  It is unclear, however, if the exemption applies only to areas of overlap, or whether insurers in compliance with the Colorado AI insurance law would be exempt from the general Colorado AI law even as-applied to AI systems, such as resume screening tools, that would not be covered by the insurance law.

Similarly, banks may be deemed fully compliant if they are subject to examination by a state or federal prudential regulator under any published guidance or regulations that apply to the use of high-risk AI systems and meet certain specified criteria (i.e., regulations that are at least as restrictive as the Law and impose audit and risk mitigation obligations relating to high-risk AI systems).  Because it is unclear whether any such guidance or regulations currently exist, however, it is unclear how impactful this exemption will be in practice.

In any action to enforce the Law, it is an affirmative defense that the defendant (i) discovered and cured the violation under certain circumstances that are laid out in detail, such as discovery through red-teaming, or (ii) is otherwise in compliance with the NIST AI RMF or another nationally or internationally recognized risk management framework for AI systems, if the standards are substantially equivalent to, or more stringent than, the Law’s requirements.  The safe harbor derived from compliance with AI risk management frameworks may prove challenging to utilize given the detailed requirements set forth in frameworks such as the NIST AI RMF and ISO/IEC 42001.

Developer and Deployer Obligations

The Law requires both developers and deployers of high-risk AI systems to use reasonable care to protect consumers from known or reasonably foreseeable risks of algorithmic discrimination.  Compliance with the developer- or deployer-specific obligations under the Law creates a rebuttable presumption that the developer or deployer, respectively, used reasonable care.

Developer-Specific Obligations

Disclosures: The Law requires developers of high-risk AI systems to disclose a significant amount of information to deployers of those systems, including the following:

  • the intended purpose, outputs, and benefits of the system;
  • the reasonably foreseeable uses;
  • the known harmful or inappropriate uses;
  • “all other information necessary to allow the deployer to comply with the requirements of [the Law]”;
  • documentation that describes:
    • how, prior to the offer or sale of the AI system, the system was evaluated for performance and mitigation of algorithmic discrimination;
    • the data governance measures that were in place for the training datasets and the measures used to examine the suitability of data sources, possible biases, and appropriate mitigation;
    • to the extent feasible, through artifacts such as model cards, dataset cards, or other impact assessments, the information necessary for a deployer to complete an impact assessment; and
    • any additional information that is reasonably necessary to assist the deployer in understanding the outputs, monitoring the system’s performance, and complying with the requirements of the Law.

Public Notice: Developers are also required to disclose publicly on their websites the types of high-risk AI systems they have developed and how they manage the risks of algorithmic discrimination.  Such statements must be updated as necessary to remain accurate and, in any event, no later than 90 days after a significant change to the impacted systems.

Attorney General Notice: Developers must disclose to the Attorney General any known or reasonably foreseeable risks of algorithmic discrimination arising from the intended uses of the system no later than 90 days after discovering algorithmic discrimination or receiving notice of it from a deployer.

Deployer-Specific Obligations

The Law requires deployers of high-risk AI systems to implement the following measures:

  • Risk Management: Implement a risk management policy and program to govern the deployer’s use of the high-risk AI system.  The policy must specify the principles, processes, and personnel used to identify and mitigate algorithmic discrimination.  The Law specifies numerous considerations that must be taken into account in the risk evaluation.
  • Impact Assessment: Complete an impact assessment for the system annually, and within 90 days of any substantial modification, covering (i) the purpose, intended use cases, deployment context, and benefits associated with the high-risk AI system; (ii) an analysis of whether the deployment of the high-risk AI system poses a risk of algorithmic discrimination, the nature of that risk, and how it has been mitigated; (iii) a description of the categories of data that serve as inputs and the outputs produced by the high-risk AI system; (iv) the categories of data that the deployer used to customize the high-risk AI system, if applicable; (v) the known limitations of the high-risk AI system and the metrics used to evaluate performance; (vi) a description of transparency measures taken, such as consumer disclosures; (vii) a description of post-deployment monitoring and user safeguards; and (viii) a description of any unforeseen uses. (A minimal illustrative template appears after this list.)
  • Consumer Disclosure of AI Use: Inform consumers that they are interacting with an AI system, unless it would be obvious to a reasonable person that the person is interacting with an AI system.  If the AI system will be used as a substantial factor in making a consequential decision relating to them, the disclosure must include the nature of the decision, the role of the AI system, contact information for the deployer, and any applicable opt-out rights.
  • Consumer Disclosure of AI Decision and Appeal Opportunity: Inform consumers of the principal reason or reasons for the consequential decision if it is adverse to the consumer, including: (i) how the AI system contributed to the decision; (ii) the sources and types of data that were used by the AI system in making the decision; and (iii) an opportunity to correct any incorrect personal data that the AI system processed in making the consequential decision, as well as an opportunity to appeal, which must—if technically feasible—allow for human review, unless an appeal is not in the best interest of the consumer (for example, locking an online account in response to suspected fraudulent activity).
  • Consumer Disclosure of Existing Opt-Out Rights: If applicable, inform consumers about existing rights under the Colorado Privacy Act to opt out of the processing of personal data for AI decisions.
  • Public Disclosure: Provide information on the deployer’s website, including: (i) the types of high-risk AI systems that are currently deployed by the deployer; (ii) how the deployer manages risks of algorithmic discrimination; and (iii) the nature, source, and extent of the information collected and used by the deployer.  Companies will need to regularly update these disclosures to reflect the deployment and management of high-risk AI systems.
  • Notification to Colorado Attorney General: Notify the Colorado Attorney General of any discovered algorithmic discrimination no later than 90 days after the date of discovery by the deployer.
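
To make the impact assessment obligation concrete, the sketch below shows one way a deployer might structure an assessment record so that each of the eight required elements is captured and reassessment deadlines are tracked. It is a minimal, illustrative example only; the Python structure, field names, and review rule are hypothetical and are not prescribed by the Law.

```python
from dataclasses import dataclass, field
from datetime import date
from typing import Optional

@dataclass
class ImpactAssessment:
    """Illustrative record of the eight elements a deployer's impact assessment must cover.

    The field names are hypothetical; the Law prescribes the content of the
    assessment, not its format.
    """
    system_name: str
    assessment_date: date
    purpose_and_intended_use: str              # (i) purpose, use cases, deployment context, benefits
    discrimination_risk_analysis: str          # (ii) risk of algorithmic discrimination and mitigation
    input_data_categories: list[str]           # (iii) categories of data used as inputs
    output_categories: list[str]               # (iii) outputs produced by the system
    customization_data_categories: list[str]   # (iv) data used to customize the system, if applicable
    known_limitations: str                     # (v) known limitations and performance metrics
    transparency_measures: str                 # (vi) consumer disclosures and similar measures
    post_deployment_monitoring: str            # (vii) monitoring and user safeguards
    unforeseen_uses: list[str] = field(default_factory=list)  # (viii) any unforeseen uses observed

def reassessment_due(last_assessment: date, last_substantial_modification: Optional[date]) -> bool:
    """A new assessment is needed at least annually, and within 90 days of a substantial modification."""
    if last_substantial_modification and last_substantial_modification > last_assessment:
        return True  # a fresh assessment must be completed within 90 days of the modification
    return (date.today() - last_assessment).days >= 365
```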

Violations

The Law does not create a private right of action; the Colorado Attorney General has exclusive enforcement authority.  Moreover, violations are considered unfair trade practices under Colorado state law.  Note that the Law creates a rebuttable presumption that a deployer used reasonable care if the deployer complied with the Law’s governance and compliance requirements.

C. Comparison with the EU AI Act

While the Colorado AI Law has many provisions that are similar to those of the EU AI Act—both adopt a risk-based approach to AI regulation—the Colorado law is narrower in many ways than its EU counterpart.

The Colorado AI Law and the EU AI Act both regulate “high-risk AI systems” by imposing certain (often similar) governance, transparency, and information requirements on the developers and deployers of those systems.  For example, both laws impose certain broad information disclosure obligations and AI governance requirements (including risk assessments, impact assessments, and transparency requirements) on the developers and deployers of high-risk AI systems, respectively.

However, the jurisdictional and substantive scope of the laws are different in several important ways.  For example:

  • Territorial Scope: The EU AI Act has a broader territorial scope.  Whereas the Colorado law applies only to AI developers or deployers doing business in the state, the EU law covers AI developers, deployers, importers, and distributors wherever they are established, provided their AI systems affect users within the EU or the output of the AI system is used within the EU.
  • Material Scope: While the Colorado law covers only “high-risk” AI systems, the EU law also regulates prohibited AI practices, lower-risk AI systems that trigger transparency obligations, and general-purpose AI models, including those presenting “systemic risk.”
  • High-Risk Definition: The Colorado law and the EU AI Act adopt different definitions of “high-risk” AI systems, with the EU having a slightly more expansive list of covered AI systems than Colorado.

D. Elements of an AI Governance Program

Although Colorado is the first state to enact an AI regulation of this type, many states are in the process of enacting some form of AI regulation, most of which will require some form of AI governance for companies that are adopting AI for significant parts of their operations.  Such companies should therefore consider implementing the following elements of an AI governance program to be ready for these coming AI compliance obligations.

  • Scope. Determine which kinds of models, algorithms, big data systems, and AI applications will be covered by the company’s AI governance and compliance program, which tools are not covered, and why.  Because AI definitions are often vague and difficult to apply in practice, it is best to include several concrete examples of what is and is not covered.
  • Inventory. For each application of AI that is governed by the program, document details about the application, which may include its purpose, the problem it is intended to solve, the inputs and outputs, the training set, the anticipated benefits to the company and its customers, potential risks, who may be harmed, whether the model involves automated decision-making or human oversight, any necessary safeguards, and who is responsible for the successful deployment of the AI application (see the illustrative sketch after this list).
  • Guiding Principles. Create a high-level set of guiding principles for the design, development, and use of AI, which may include commitments to accountability, fairness, privacy, reliability, and transparency.
  • Code of Conduct. Draft an employee-facing AI Code of Conduct or Acceptable Use Policy to operationalize the guiding principles.
  • Cross-Functional AI Governance Committee. Establish a cross-functional committee that oversees the program or implements other means for establishing overall accountability, including vetting new high-risk uses and identifying mitigations that will allow for their continued use; overseeing policies, procedures, and guidelines for responsible AI use; reporting to senior management or the board; managing AI-related incidents; and addressing business continuity risks related to AI applications.
  • Risk Factors and Assessments. Create a list of risk factors to help classify AI applications as low- or high-risk (with examples) and determine how AI applications will be assessed for risk.  This allows an organization to prioritize the highest-risk models for the cross-functional committee to review.
  • Risk Mitigation Measures. Create a set of possible risk mitigation measures that the Governance Committee can implement to reduce the risks associated with certain high-risk AI applications, including bias assessments, stress testing, enhanced transparency, or additional human oversight, as appropriate.
  • Training. Provide training for individuals involved in developing, monitoring, overseeing, testing, or using high-risk AI applications on the associated legal, operational, and reputational risks.
  • Policy Updates. Update critical policies to address unique risks associated with AI applications, including policies relating to recordkeeping, privacy, data governance, model risk management, and cybersecurity.
  • Incident Response. Create a plan for responding to an allegation of bias or other deficiency in an AI application and conduct an AI incident tabletop exercise to test the plan.
  • Public Statements. In light of recent regulatory guidance and enforcement activity on AI washing, review the company’s public statements relating to its use of AI to ensure their accuracy.
  • Vendor Risk Management. Review vendor policies to ensure that AI applications that are provided by third parties have been subjected to appropriate diligence and contractual provisions.
  • Senior Management and Board Oversight. Develop a plan for periodic reporting to senior management and the board on the AI governance program.
  • Documentation. Maintain documentation about the program to address concerns, respond to inquiries, and meet regulatory expectations.
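
For the inventory and risk-classification elements above, a structured register can help keep the program auditable. The sketch below is purely illustrative and assumes a simple Python record plus an over-inclusive rule of thumb for flagging applications for committee review; the field names, risk factors, and screening rule are hypothetical rather than drawn from any statute or framework.

```python
from dataclasses import dataclass, field

# Hypothetical, non-exhaustive risk factors; a real program would tailor these to its own risk taxonomy.
HIGH_RISK_FACTORS = {
    "consequential_decision",        # affects employment, lending, housing, insurance, etc.
    "no_meaningful_human_oversight",
    "sensitive_personal_data",
    "consumer_facing_output",
}

@dataclass
class AIInventoryEntry:
    """Illustrative inventory record for one AI application covered by the governance program."""
    name: str
    purpose: str
    inputs: list[str]
    outputs: list[str]
    training_data: str
    anticipated_benefits: str
    potential_harms: list[str]
    owner: str                                   # person responsible for successful deployment
    risk_factors: set[str] = field(default_factory=set)

    def needs_committee_review(self) -> bool:
        """Over-inclusive screen: any recognized risk factor sends the use case to the committee."""
        return bool(self.risk_factors & HIGH_RISK_FACTORS)

# Example: a resume-screening tool would be flagged for cross-functional committee review.
resume_screener = AIInventoryEntry(
    name="Resume screening tool",
    purpose="Rank applicants for interview",
    inputs=["resumes"],
    outputs=["ranked shortlist"],
    training_data="historical hiring decisions",
    anticipated_benefits="faster, more consistent screening",
    potential_harms=["disparate impact on protected classes"],
    owner="HR analytics lead",
    risk_factors={"consequential_decision", "sensitive_personal_data"},
)
assert resume_screener.needs_committee_review()
```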

Avi Gesser and Erez Liebermann are Partners, Matt Kelly is a Counsel, Martha Hirst and Andreas Constantine Pavlou are Associates, and Cameron Sharp and Annabella M. Waszkiewicz are Law Clerks at Debevoise & Plimpton LLP. This post first appeared on the firm’s blog.

The views, opinions and positions expressed within all posts are those of the author(s) alone and do not represent those of the Program on Corporate Compliance and Enforcement (PCCE) or of the New York University School of Law. PCCE makes no representations as to the accuracy, completeness, and validity of any statements made on this site and will not be liable for any errors, omissions or representations. The copyright of this content belongs to the author(s) and any liability with regard to infringement of intellectual property rights remains with the author(s).