The EU AI Act – Navigating the EU’s Legislative Labyrinth

by Avi Gesser, Matt Kelly, Martha Hirst, Samuel J. Allaman, Melissa Muse, and Samuel Thomson


As legislators and regulators around the world are trying to determine how to approach the novel risks and opportunities that AI technologies present, the draft European Union Artificial Intelligence Act (the “EU AI Act” or the “Act”) is a highly anticipated step towards the future of AI regulation. Despite recent challenges in the EU “trilogue negotiations”, proponents still hope to reach a compromise on the key terms by 6 December 2023, with a view to passing the Act in 2024 and most of the provisions becoming effective sometime in 2026.

As one of the few well-progressed AI-specific laws currently in existence, the EU AI Act has generated substantial global attention. Much as the EU’s GDPR shaped the contours of global data privacy laws, the EU AI Act has the potential to influence the worldwide evolution of AI regulation.

This blog post summarizes the complexities of the EU legislative process to explain the current status of, and next steps for, the draft EU AI Act. It also includes steps that businesses may want to start taking now in preparation for incoming AI regulation.

What is the EU AI Act?

The EU AI Act is the bloc’s AI-specific legislation, which will regulate the sale and use of AI across the Union. It will have broad extraterritorial reach, covering AI providers and users, regardless of where they are established, provided their AI systems affect users within the EU.

The details of the Act are still being negotiated and so remain subject to change. However, the broad outline and structure of the Act have remained relatively constant, and it appears highly likely that this basic framework will serve as the basis of any agreed text.

Based on the current drafts, the Act proposes to classify AI systems into four tiers according to their potential risk (a rough illustrative sketch follows the list below):

  1. Unacceptable risk systems. These will be outright prohibited.
  2. High risk systems. These will likely have to meet onerous requirements to prove that they do not pose a significant threat to fundamental rights. The specific details of these requirements are yet to be agreed but could include heightened data governance standards, monitoring and record-keeping rules, enhanced standards for cybersecurity and transparency, as well as human oversight obligations.
  3. Limited risk systems. Systems in this category will be subject to less onerous restrictions. These could include transparency requirements, such as an obligation to flag to users when content has been artificially generated.
  4. Minimal risk systems. These constitute most AI systems in existence. Systems in this category are unlikely to be subject to any new restrictions or obligations.
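
For businesses beginning to map their systems against this framework, the tiers lend themselves to a simple internal representation. Below is a minimal, purely illustrative Python sketch: the tier names track the drafts, but the obligation summaries are simplified assumptions, not statutory text.

```python
from enum import Enum

class RiskTier(Enum):
    """The four risk tiers proposed in current drafts of the EU AI Act."""
    UNACCEPTABLE = "unacceptable"  # outright prohibited
    HIGH = "high"                  # onerous conformity requirements
    LIMITED = "limited"            # lighter obligations, e.g. transparency
    MINIMAL = "minimal"            # unlikely to face new obligations

# Simplified summaries of draft obligations per tier -- illustrative
# assumptions only, not language from the Act.
DRAFT_OBLIGATIONS = {
    RiskTier.UNACCEPTABLE: ["prohibited: do not develop or deploy"],
    RiskTier.HIGH: [
        "heightened data governance standards",
        "monitoring and record-keeping",
        "cybersecurity and transparency standards",
        "human oversight",
    ],
    RiskTier.LIMITED: ["transparency, e.g. flagging AI-generated content"],
    RiskTier.MINIMAL: [],
}
```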

Member States will be required to designate AI oversight authorities to ensure compliance with the new regulations. While some countries have indicated that they intend to allocate responsibility to their existing data protection authorities, others (such as Spain) will create specific AI-focused agencies.

Where are we in the legislative process?

The EU’s legislative drafting and negotiation process involves three institutions: the European Commission (which represents the interests of the EU as a whole), the Council of the European Union (which represents member state governments) and the European Parliament (which represents EU citizens). The drafting of the AI Act has been a lengthy process which has spanned several years.

  • On 21 April 2021, the Commission published its draft legislation governing the use of AI; this forms the basis of the negotiations. The Commission’s draft was then reviewed in parallel by the Parliament and the Council.
  • In order to expedite the negotiation process, in December 2022 the Council adopted its non-binding draft of the AI Act, known as a general approach, to give the Parliament an indication of the Council’s position prior to the first round of Parliament negotiations.
  • Following several months of negotiations, the Parliament then adopted its draft version of the AI Act in June 2023.
  • These three draft versions of the Act form the basis of the Commission’s, Council’s and Parliament’s negotiation mandates for informal, closed-door meetings – known as trilogues – between representatives of the three bodies to secure provisional agreement on the Act’s final form. While the mandates give us some indication of what the Act may ultimately look like, we won’t have any concrete answers until the final text is agreed.
  • In an attempt to conclude this legislative process before the next Parliamentary elections in summer 2024, the trilogues have followed an expedited timeline. The final scheduled meeting is due to take place on 6 December 2023; this is reportedly being treated as an informal deadline by the EU institutions.
  • If a final agreement is reached, the draft will be returned to the Council and Parliament for formal approval before the Act is passed. Under the current timeline, this is expected to happen in the first half of 2024, ahead of the summer Parliamentary elections. If it is passed, most of the Act will then come into force following a two-year transitional period – likely in the first half of 2026. To bridge that gap, the European Commission has introduced the AI Pact, encouraging businesses to voluntarily commit to complying with the Act before the legal deadline.

The negotiation process has been further complicated by the unprecedented dynamism of the AI technology that underpins the regulation. For example, the recent rise in public access to, and media coverage of, popular generative AI applications such as ChatGPT and Bard has added new dimensions to discussions surrounding the intended regulation of foundation models. Recent trilogue discussions have reportedly considered adding transparency, documentation, performance, and auditing obligations for developers of foundation models like GPT, Llama 2, or Stable Diffusion 2. This has sparked a debate over whether such provisions amount to a regulation within a regulation and whether the EU AI Act is the proper vehicle for new obligations that could affect innovation within a burgeoning industry. Additionally, the speed at which this technology is developing means that what it was capable of two years ago (when the Commission published its draft Act) is, in some cases, monumentally different from what it is capable of now.

These technological advancements underscore the inherent challenges in regulating AI, potentially accounting for some of the disparities in the draft laws proposed by each of the three EU institutions. Additionally, while the trilogue parties will operate within the confines of their negotiation mandates, the pace of development in this area means that we could end up with a final text of the EU AI Act that is tangibly different from the earlier drafts.

How to prepare

While there is ongoing uncertainty about the detail of the final EU AI Act, and most of the Act will not be enforceable for a couple of years, it is clear that EU-wide AI regulation is on the horizon. Further, businesses should be mindful of any existing AI-related governance and compliance obligations under technology-neutral laws, such as the GDPR, employment laws, anti-discrimination laws and human rights legislation.

Given the challenges for businesses in trying to retrospectively implement AI governance structures, there are several steps that businesses may want to consider taking now to prepare.

For example, businesses should consider designating a committee (which could be AI-specific) to oversee and monitor the business’ use of AI tools and their associated use cases. The committee might initially consist of a small number of individuals, with additional representatives from other business units (such as legal, compliance, risk, finance and IT) added as the business’ AI use, and associated compliance program, matures. Establishing an AI committee also helps to create a small group of AI subject-matter experts within the organization who can spend the time studying the requirements of applicable regulations, such as the EU AI Act, and understanding how they apply to the business’ AI usage.

Having established an AI-oversight committee, businesses can then consider the policies and procedures required to ensure an effective and compliant AI-governance framework. In particular, the EU AI Act will likely impose additional compliance and documentation obligations on businesses that use high-risk and limited-risk AI systems. Businesses subject to the EU AI Act will need to ensure that their AI compliance programs incorporate these requirements, and effectively identify which AI systems and use cases may trigger them.

For example, this may include the following (a brief illustrative sketch follows the list):

  • creating an inventory of the AI tools the business has access to and their use cases in production;
  • creating an AI-related incident response plan to ensure AI incidents are reported to regulatory authorities as required;
  • implementing and documenting quality management procedures;
  • developing a risk-rating framework for the business’ AI tools and use cases;
  • determining how the firm will identify high-risk AI tools and use cases and mitigate the associated risks; and
  • amending the business’ cybersecurity procedures to account for any novel cyber risks that AI tools present.
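
To make the first two items concrete, here is a minimal sketch of what a single entry in such an AI inventory might look like. Every field name and value is a hypothetical assumption chosen for illustration; neither the Act nor any regulator prescribes this format.

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class AIToolRecord:
    """One entry in a hypothetical AI inventory -- illustrative only."""
    name: str            # the tool, e.g. an internal chatbot or scoring model
    vendor: str          # developer or supplier of the tool
    use_cases: list      # documented production use cases
    risk_tier: str       # e.g. "high" under a four-tier framework like the Act's
    owner: str           # accountable business unit or individual
    last_reviewed: date  # when the oversight committee last assessed the tool
    mitigations: list = field(default_factory=list)

# Hypothetical example; the drafts treat employment-related uses as high risk.
example = AIToolRecord(
    name="resume-screening-model",    # hypothetical tool
    vendor="ExampleVendor Inc.",      # hypothetical supplier
    use_cases=["shortlisting job applicants"],
    risk_tier="high",
    owner="HR / Legal",
    last_reviewed=date(2023, 11, 1),
)
```

An inventory along these lines, combined with a risk-rating framework, gives the oversight committee a starting point for identifying which systems and use cases may trigger the Act’s high-risk obligations.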

Businesses may also wish to consult the AI-specific guidance published by various European regulators, including the German Baden-Württemberg Commissioner for Data Protection, the Italian Garante, the French CNIL, and the Confederation of European Data Protection Organisations, when designing their AI compliance structures.

Avi Gesser is a Partner, Matt Kelly is Counsel, Martha Hirst, Samuel J. Allaman, and Melissa Muse are Associates, and Samuel Thomson is a Trainee Solicitor at Debevoise & Plimpton LLP. The post was first published on the firm’s blog.

The views, opinions and positions expressed within all posts are those of the author(s) alone and do not represent those of the Program on Corporate Compliance and Enforcement (PCCE) or of the New York University School of Law. PCCE makes no representations as to the accuracy, completeness or validity of any statements made on this site and will not be liable for any errors, omissions or representations. The copyright of this content belongs to the author(s) and any liability with regard to infringement of intellectual property rights remains with the author(s).