Regulating AI – The Next “Brussels Effect”?

by Katja Langenbucher


How to deal with the challenges of Artificial Intelligence has been at the forefront of lawmakers’ and regulators’ initiatives around the world. In August, the FTC announced that it is exploring rules to tackle commercial surveillance, the SEC’s Chair Gary Gensler voiced concerns over AI in the Fintech space, and the CFTC has issued its own “primer” on AI. This morning, the Council of the European Union adopted its common position on a new regulation, the “Artificial Intelligence Act”. Most other EU institutions have already issued their comments, and the EU Parliament is scheduled to pass a final vote on the report in the first quarter of 2023.

With the aim of fostering innovation while at the same time creating an “ecosystem of trust”, the Act has opted for a somewhat unusual approach. Rather than comprehensively regulating all the players in the AI world, its strategy is product regulation. The Act sorts AI applications into different risk categories and shapes compliance requirements accordingly. Under this framework, few AI applications are entirely prohibited, and many face no or only minimal obligations. Those which the EU considers “high-risk”, however, must comply with newly established rules. This regulatory strategy makes for a clear focus on developers of AI applications and their users. By contrast, the Act does not include consumers’ private rights of action. Most private rights depend on Member State laws; some, however, are addressed in sectoral EU legislation. The pending reform of the Consumer Credit Directive provides an example: it regulates AI scoring and lending platforms and – for the first time – includes an explicit prohibition of discriminatory lending practices similar to the US ECOA.

An AI application is prohibited where the AI impairs human autonomy and decision-making via “stimuli (…) beyond human perception” (Recital 16). Further prohibitions concern certain “social scoring” practices (Art. 5 para. 1 lit. c) and certain uses of biometric identification in public spaces for the purposes of law enforcement (Art. 5 para. 1 lit. d, with exceptions in para. 2).

The high-risk classification of an AI application depends on “the intensity and the scope of the risks that AI systems can generate” (Recital 14). How to assess risk depends on the type of AI application.

A first category comprises AI applications which are safety components of a product or are products themselves. They automatically fall into the high-risk category if they are required to undergo third-party conformity assessments. This captures products as diverse as toys, lifts, cableway installations and medical devices. The developer of the AI application (rather than a public agency) is required to run conformity assessments prior to putting the AI on the market. Private standard-making bodies will develop guidance on how to assess conformity with the AI Act. Compliance with such guidance will then lead to a presumption of conformity with the Act’s rules, though not with other legal rules such as the GDPR. For AI systems that operate in an area where conformity assessment procedures exist, standard-setting bodies such as the European Committee for Standardisation (CEN) will be important rule-setters. Consequently, there is concern about lobbying and regulatory capture of these bodies.

AI systems where no conformity assessment procedures exist form the second category. These stand-alone AI applications are held to a different risk-based standard. The Act lists three relevant risks, namely harm to health, safety, or fundamental rights. An Annex to the Act specifies a list of critical areas of use for these stand-alone AI systems. These areas encompass (1) biometric identification, (2) critical infrastructure, (3) education, (4) employment, (5) essential private services, (6) law enforcement, (7) migration, and (8) administration of justice and democratic processes. The Commission has the power to update the Annex, but it may not add new areas.

For stand-alone AI applications, the Act requires various governance, data and model quality procedures. A first set of requirements looks to data governance and management practices to ensure high-quality training, validation, and testing data (Recital 44). A second set requires developers to ensure a certain degree of transparency for users (Recital 47). Developers must provide relevant documentation, instructions for use and concise information about relevant risks. A third set of requirements concerns human oversight: AI applications must include operational constraints that cannot be overridden by the AI, and adequate training must be ensured for the people in charge of human oversight (Recital 48).

Market surveillance and enforcement are public, not private. Member States will have to designate authorities charged with this task unless developers or users of AI applications are already regulated entities. This can be true for the first category of AI applications which are safety components or products, for instance self-driving cars or medical devices. As to the second category, the stand-alone AI applications, financial supervisors and regulators will take over market surveillance and enforcement for financial institutions which develop or use AI. For many Fintech applications, this leads to a possibly unfortunate dual track: financial institutions are supervised by a financial regulator, while scoring bureaus or some lending platforms answer to the more general authorities which Member States have designated.

For the US, the AI Act might trigger yet another “Brussels effect”, given that its scope of application extends to any developer of AI applications which are put into service in the EU, irrespective of whether the developers are physically present in the EU. The same is true for third-country users if the output they produce is used in the EU (Art. 2 para. 1 lit. a, c). It might also provide arguments for both Congress and regulators to move ahead with various pending initiatives, some of them bipartisan. Regulating AI is often about technical issues, as captured in the EU’s product regulatory approach. At its core, however, we find normative underpinnings such as human rights, algorithmic fairness and human autonomy, which require a sustained global effort.

Katja Langenbucher is a law professor at Goethe-University’s House of Finance in Frankfurt, an affiliated professor at SciencesPo, Paris, a long-term guest professor at Fordham Law School, NYC, and a SAFE Fellow with the Leibniz Institute for Financial Research SAFE.

The views, opinions and positions expressed within all posts are those of the author alone and do not represent those of the Program on Corporate Compliance and Enforcement (PCCE) or of New York University School of Law. PCCE makes no representations as to the accuracy, completeness and validity of any statements made on this site and will not be liable for any errors, omissions or representations. The copyright of this content belongs to the author and any liability with regards to infringement of intellectual property rights remains with the author.