Tag Archives: Martha Hirst

The EU AI Act is Officially Passed – What We Know and What’s Still Unclear

by Avi Gesser, Matt Kelly, Robert Maddox, and Martha Hirst


From left to right: Avi Gesser, Matt Kelly, Robert Maddox, and Martha Hirst. (Photos courtesy of Debevoise & Plimpton LLP)

The EU AI Act (the “Act”) has made it through the EU’s legislative process and has passed into law; it entered into force on 1 August 2024. Most of the substantive requirements will apply two years later, from 2 August 2026, with the main exception being “Prohibited” AI systems, which will be banned from 2 February 2025.

Despite initial expectations of a sweeping and all-encompassing regulation, the final version of the Act reveals a narrower scope than some initially anticipated.


Recently Enacted AI Law in Colorado: Yet Another Reason to Implement an AI Governance Program

by Avi Gesser, Erez Liebermann, Matt Kelly, Martha Hirst, Andreas Constantine Pavlou, Cameron Sharp, and Annabella M. Waszkiewicz


Top left to right: Avi Gesser, Erez Liebermann, Matt Kelly, and Martha Hirst. Bottom left to right: Andreas Constantine Pavlou, Cameron Sharp, and Annabella M. Waszkiewicz. (Photos courtesy of Debevoise & Plimpton LLP)

On May 17, 2024, Colorado passed Senate Bill 24-205 (“the Colorado AI Law” or “the Law”), a broad law regulating so-called high-risk AI systems that will become effective on February 1, 2026. The Law imposes sweeping obligations on both AI system deployers and developers doing business in Colorado, including a duty of reasonable care to protect Colorado residents from any known or reasonably foreseeable risks of algorithmic discrimination.


The EU AI Act – Navigating the EU’s Legislative Labyrinth

by Avi Gesser, Matt Kelly, Martha Hirst, Samuel J. Allaman, Melissa Muse, and Samuel Thomson

From left to right: Avi Gesser, Matt Kelly, Martha Hirst, Samuel J. Allaman, and Melissa Muse. Not pictured: Samuel Thomson. (Photos courtesy of Debevoise & Plimpton LLP).

As legislators and regulators around the world are trying to determine how to approach the novel risks and opportunities that AI technologies present, the draft European Union Artificial Intelligence Act (the “EU AI Act” or the “Act”) is a highly anticipated step towards the future of AI regulation. Despite recent challenges in the EU “trilogue negotiations”, proponents still hope to reach a compromise on the key terms by 6 December, with a view to passing the Act in 2024 and most of the provisions becoming effective sometime in 2026.

As one of the few well-progressed AI-specific laws currently in existence, the EU AI Act has generated substantial global attention. Analogous to the influential role played by the EU’s GDPR in shaping the contours of global data privacy laws, the EU AI Act similarly has the potential to influence the worldwide evolution of AI regulation.

This blog post summarizes the complexities of the EU legislative process to explain the current status of, and next steps for, the draft EU AI Act. It also includes steps that businesses may want to start taking now in preparation for incoming AI regulation.


Eight GDPR Questions when Adopting Generative AI

by Avi Gesser, Robert Maddox, Friedrich Popp, and Martha Hirst


From left to right: Avi Gesser, Robert Maddox, Friedrich Popp, and Martha Hirst. (Photos courtesy of Debevoise & Plimpton LLP)

As businesses adopt Generative AI tools, they need to ensure that their governance frameworks address not only AI-specific regulations such as the forthcoming EU AI Act, but also existing regulations, including the EU and UK GDPR.

In this blog post, we outline eight questions businesses may want to ask when developing or adopting new Generative AI tools or when considering new use cases involving GDPR-covered data. At their core, they highlight the importance of integrating privacy-by-design and privacy-by-default principles into Generative AI development and use cases.

If privacy is dealt with as an afterthought, it may be difficult to retrofit controls that are sufficient to mitigate privacy-related risk and ensure compliance. Accordingly, businesses may want to involve privacy representatives in any AI governance committees. In addition, businesses that are developing their own AI tools may want to consider identifying opportunities to involve privacy experts in the early stages of Generative AI development planning.


Legal Risks of Using AI Voice Analytics for Customer Service

by Avi Gesser, Johanna Skrzypczyk, Robert Maddox, Anna Gressel, Martha Hirst, and Kyle Kysela


From left to right: Avi Gesser, Johanna Skrzypczyk, Robert Maddox, Anna Gressel, Martha Hirst, and Kyle Kysela

There is a growing trend among customer-facing businesses towards using artificial intelligence (“AI”) to analyze voice data on customer calls. Companies are using these tools for various purposes, including identity verification, targeted marketing, fraud detection, cost savings, and improved customer service. For example, AI voice analytics can detect whether a customer is very upset, and therefore should be promptly connected with an experienced customer service representative, or whether the person on the phone is not really the person they purport to be. These tools can also be used to assist customer service representatives in de-escalating calls with upset customers by making real-time suggestions of phrases to use that only the customer service representative can hear, as well as to evaluate the employee’s performance in dealing with a difficult customer (e.g., did the employee raise her voice, did she manage to get the customer to stop raising his voice, etc.).

Some of the more novel and controversial uses for AI voice analytics in customer service include (1) detecting whether a customer is being dishonest, (2) inferring a customer’s race, gender, or ethnicity, and (3) assessing when certain kinds of customers with particular concerns purchase certain goods or services, and developing a corresponding targeted marketing strategy.  


California’s Age-Appropriate Design Code Act Expands Businesses’ Privacy Obligations Regarding Minors

by Avi Gesser, Johanna N. Skrzypczyk, Michael R. Roberts, Michael J. Bloom, Martha Hirst, and Alessandra G. Masciandaro

On September 15, 2022, California Governor Gavin Newsom signed into law the bipartisan AB 2273, known as the California Age-Appropriate Design Code Act (“California Design Code”). The California Design Code aims to protect children online by imposing heightened obligations on any business that provides an online product, service, or feature “likely to be accessed by children.” Governor Newsom stated that he is “thankful to Assemblymembers Wicks and Cunningham and the tech industry for pushing these protections and putting the wellbeing of our kids first.” The California Design Code’s business obligations take effect on July 1, 2024, though certain businesses must complete Data Protection Impact Assessments “on or before” that date.

In this post, we outline the California Design Code and its compliance requirements, compare it to pre-existing privacy regimes, and conclude with key takeaways for businesses to keep in mind as they adapt to the ever-changing privacy landscape.


UK Introduces Magnitsky-Style Human Rights Sanctions Regime

by Karolos Seeger, Jane Shvets, Catherine Amirfar, Andrew M. Levine, Natalie L. Reid, David W. Rivkin, Alan Kartashkin, Konstantin Bureiko, and Martha Hirst

On 6 July 2020, the UK implemented a new sanctions regime targeting global human rights abuses, which allows the UK government to impose asset freezes and travel bans on persons it determines to have committed serious human rights violations. These restrictions have initially targeted 49 persons from Myanmar, Russia, Saudi Arabia and North Korea.

This is the first time since Brexit that the UK has diverged from EU sanctions policy. Although many of the targets and restrictions are broadly aligned with the “Magnitsky”-style sanctions previously implemented by the United States and Canada, the UK regime has some important differences. Companies operating in the UK will need to ensure that their sanctions systems and controls reflect this new regime.


The EPPO and International Co-Operation – New Kid on the Block

by Karolos Seeger, Jane Shvets, Robin Lööf, Alma M. Mozetič, Martha Hirst, Antoine Kirry, Alexandre Bisch, Ariane Fleuriot, Dr. Thomas Schürrle, Dr. Friedrich Popp, and Dr. Oliver Krauß

The European Public Prosecutor’s Office (“EPPO”) is a new European Union body responsible for investigating and prosecuting criminal offences affecting the EU’s financial interests in 22 of its 28 Member States.[1] The EPPO is expected to begin investigations in November 2020.

Fraud against the financial interests of the EU is an international phenomenon: in 2018, the European Anti-Fraud Office (“OLAF”) concluded 84 investigations into the use of EU funds, 37 of which concerned countries outside the EU.[2] In this part of our series of analyses of the EPPO,[3] we therefore consider the framework for the EPPO’s future international co-operation. This includes dealings with enforcement authorities in non-participating EU Member States as well as the rest of the world.
