Category Archives: EU AI Act

The Rise of Audits as a Regulatory Tool for Tech

by Janet Kim, Matthew Bruce, Lutz Riede, Tristan Lockwood, Fiona McHugh, Florentine Schulte-Rudzio, and Bhavya Sharma

Top left to right: Janet Kim, Matthew Bruce, Lutz Riede, and Tristan Lockwood. Bottom left to right: Fiona McHugh, Florentine Schulte-Rudzio, and Bhavya Sharma (photos courtesy of Freshfields LLP)

As technology evolves, so do the challenges of regulating it effectively. In an era of increasing focus on the oversight of digital platforms, legislators are turning to audits as a go-to tool. This post examines the reasons behind the growing adoption of audits in digital regulation, focusing on key legislative frameworks such as the EU’s Digital Services Act (DSA) and the UK’s Online Safety Act (OSA), considers the scope of audits in AI and other digital regulation, and offers practical tips for businesses navigating these new audit regimes.

Continue reading

For AI Innovators Seeking to Mitigate the Risks of Regulatory Uncertainty, It Pays to Remember the Fundamentals

by Charles V. Senatore

Photo courtesy of the author

For many years, regulatory uncertainty in the United States has been part of the landscape for innovators, particularly with the rise of emerging technologies such as cryptocurrencies, blockchain, and artificial intelligence.  It can, unfortunately, thwart the progress of responsible innovation and place our innovators at a competitive disadvantage. 

We recently have seen a dramatic example of regulatory uncertainty in the artificial intelligence space. 

Continue reading

The EU AI Act Countdown Is Over: First Wave of Requirements Now in Force

by Avi Gesser, Matt Kelly, Martha Hirst, and Samuel Thomson

Left to right: Avi Gesser, Matt Kelly, Martha Hirst, and Samuel Thomson (Photos courtesy of authors)

The first wave of the EU AI Act’s requirements came into force on 2 February 2025, namely:

  • Prohibited AI: the ban on the use and distribution of prohibited AI systems, and
  • AI Literacy: the requirement to ensure staff using and operating AI possess sufficient AI literacy.

All businesses caught by the EU AI Act’s jurisdictional scope – which is potentially very broad and may even exceed the scope of the GDPR – are now required to comply with these obligations.

Continue reading

Sweeping AI Legislation Under Consideration in Virginia

by Beth Waller and Patrick Austin

Beth Burgin Waller and Patrick J. Austin (photos courtesy of Woods Rogers Vandeventer Black PLC)

Virginia, a leader in technology and privacy-related regulation, is methodically examining artificial intelligence legislation. In particular, significant legislation establishing a regulatory framework for high-risk artificial intelligence (AI) systems is currently being considered by the Virginia General Assembly’s Joint Commission on Technology and Science (JCOTS). JCOTS, a permanent legislative agency that studies and develops technology- and science-related policies in Virginia, has held several hearings to gather expert input on AI issues and has formed an AI-specific subcommittee. The JCOTS AI Subcommittee is considering two pieces of legislation that would govern the use of high-risk AI systems by public entities and private-sector entities.

Continue reading

The Changing Approach to Compliance in the Tech Sector

by Florencia Marotta-Wurgler

Photo courtesy of author

Technological innovations such as generative artificial intelligence (AI) have come under increasing scrutiny from regulators in the U.S., the European Union, and beyond. This heightened oversight aims to ensure that companies implement strong privacy, safety, and design safeguards to protect users and secure the data used to train advanced AI models. Some of these regulations have already come into effect, and others soon will. The European Union’s AI Act is expected to take effect in the second half of 2024, requiring firms to comply with regulations based on the risk level of their AI systems, including obligations for transparency, data governance, human oversight, and risk management for high-risk AI applications. Within the U.S., several states have enacted laws requiring app providers to verify users’ ages and regulating AI to protect users, especially children. At the federal level, proposed legislation like the Kids Online Safety Act (KOSA) and the American Data Privacy and Protection Act (ADPPA) seeks to establish national standards for youth safety, data privacy, age verification, and AI transparency on digital platforms.

For many firms, these regulatory shifts have necessitated a complete reevaluation of their compliance strategies. Meta is a fresh example of how businesses may be navigating this evolving landscape. At its “Global Innovation and Policy” event on October 16 and 17, which gathered academics, technology leaders, and policy experts, Meta executives outlined the company’s expanded compliance strategy. This strategy now extends beyond privacy concerns to tackle broader regulatory challenges, such as AI governance, youth protection, and content moderation.

Continue reading

The EU AI Act is Officially Passed – What We Know and What’s Still Unclear

by Avi Gesser, Matt Kelly, Robert Maddox, and Martha Hirst

From left to right: Avi Gesser, Matt Kelly, Robert Maddox, and Martha Hirst. (Photos courtesy of Debevoise & Plimpton LLP)

The EU AI Act (the “Act”) has made it through the EU’s legislative process and passed into law; it will come into effect on 1 August 2024. Most of the substantive requirements will come into force two years later, from 1 August 2026, with the main exception being “Prohibited” AI systems, which will be banned from 2 February 2025.

Despite initial expectations of a sweeping and all-encompassing regulation, the final version of the Act reveals a narrower scope than some initially anticipated.

Continue reading

CNIL Publishes New Guidelines on the Development of AI Systems

by David Dumont and Tiago Sérgio Cabral

David Dumont and Tiago Sérgio Cabral (photos courtesy of Hunton Andrews Kurth LLP)

On June 7, 2024, following a public consultation, the French Data Protection Authority (the “CNIL”) published the final version of its guidelines addressing the development of AI systems from a data protection perspective (the “Guidelines”). Read our blog on the pre-public consultation version of these Guidelines.

In the Guidelines, the CNIL states that, in its view, the successful development of AI systems can be reconciled with the challenges of protecting privacy.

Continue reading

Recently Enacted AI Law in Colorado: Yet Another Reason to Implement an AI Governance Program

by Avi Gesser, Erez Liebermann, Matt Kelly, Martha Hirst, Andreas Constantine Pavlou, Cameron Sharp, and Annabella M. Waszkiewicz

Top left to right: Avi Gesser, Erez Liebermann, Matt Kelly, and Martha Hirst. Bottom left to right: Andreas Constantine Pavlou, Cameron Sharp, and Annabella M. Waszkiewicz. (Photos courtesy of Debevoise & Plimpton LLP)

On May 17, 2024, Colorado passed Senate Bill 24-205 (“the Colorado AI Law” or “the Law”), a broad law regulating so-called high-risk AI systems that will become effective on February 1, 2026. The Law imposes sweeping obligations on both AI system deployers and developers doing business in Colorado, including a duty of reasonable care to protect Colorado residents from any known or reasonably foreseeable risks of algorithmic discrimination.

Continue reading

Limited-Risk AI—A Deep Dive Into Article 50 of the European Union’s AI Act

by Martin Braun, Anne Vallery, and Itsiq Benizri

Left to right: Martin Braun, Anne Vallery and Itsiq Benizri (photos courtesy of the authors)

This blog post focuses on the transparency requirements associated with certain limited-risk artificial intelligence (AI) systems under Article 50 of the European Union’s AI Act.

As explained in our previous blog post, the AI Act’s overall risk-based approach means that, depending on the level of risk, different requirements apply. In total, there are four levels of risk: (1) unacceptable risk, in which case AI systems are prohibited (see our blog post on prohibited AI practices for more details); (2) high risk, in which case AI systems are subject to extensive requirements, including regarding transparency; (3) limited risk, which triggers only transparency requirements; and (4) minimal risk, which does not trigger any obligations.

Continue reading

Mitigating AI Risks for Customer Service Chatbots

by Avi Gesser, Jim Pastore, Matt Kelly, Gabriel Kohan, Melissa Muse, and Joshua A. Goland

Top left to right: Avi Gesser, Jim Pastore, and Matt Kelly. Bottom left to right: Gabriel Kohan, Melissa Muse and Joshua A. Goland (photos courtesy of Debevoise & Plimpton LLP)

Online customer service chatbots have been around for years, allowing companies to triage customer queries with pre-programmed responses that address customers’ most common questions. Now, Generative AI (“GenAI”) chatbots have the potential to change the customer service landscape by answering a wider variety of questions, on a broader range of topics, and in a more nuanced and lifelike manner. Proponents of this technology argue that companies can achieve better customer satisfaction while reducing the costs of human-supported customer service. But the risks of irresponsible adoption of GenAI customer service chatbots, including increased litigation and reputational risk, could eclipse their promise.

We have previously discussed risks associated with adopting GenAI tools, as well as measures companies can implement to mitigate those risks. In this Debevoise Data Blog post, we focus on customer service chatbots and provide some practices that can help companies avoid legal and reputational risk when adopting such tools.

Continue reading