Tag Archives: Robert Maddox

The EU AI Act is Officially Passed – What We Know and What’s Still Unclear

by Avi Gesser, Matt Kelly, Robert Maddox, and Martha Hirst


From left to right: Avi Gesser, Matt Kelly, Robert Maddox, and Martha Hirst. (Photos courtesy of Debevoise & Plimpton LLP)

The EU AI Act (the “Act”) has completed the EU’s legislative process and passed into law; it enters into force on 1 August 2024. Most of its substantive requirements will apply two years later, from 2 August 2026, with the main exception being “Prohibited” AI systems, which will be banned from 2 February 2025.

Despite initial expectations of a sweeping and all-encompassing regulation, the final version of the Act reveals a narrower scope than some initially anticipated.

Continue reading

EU Digital Operational Resilience Act (“DORA”): Incident and Cyber Threat Reporting and Considerations for Incident Response Plans

by Robert Maddox, Stephanie Thomas, Annabella M. Waszkiewicz, and Michiko Wongso 


Left to right: Robert Maddox, Stephanie Thomas, Annabella M. Waszkiewicz, and Michiko Wongso (photos courtesy of Debevoise & Plimpton LLP)

With the EU Digital Operational Resilience Act (“DORA”) implementation deadline set for January 2025, many financial services firms are spending 2024 preparing for the new regime. Amongst many operational resilience and management oversight requirements, DORA will require covered entities to monitor for, identify, and classify Information and Communications Technology (“ICT”)-related incidents (“incidents”) and cyber threats and report them under certain circumstances to regulators, clients, and the public.

In this post, we take a closer look at DORA’s ICT-related incident and cyber threat reporting obligations (which can require notifications as fast as four hours) and how covered entities can prepare to address them within their existing incident response plans (“IRPs”).

For a more general overview of DORA’s requirements, please see our previous blog post here, along with our coverage of management obligations for covered entities under DORA and how DORA will impact fund managers and the insurance sector in Europe.

Continue reading

Eight GDPR Questions when Adopting Generative AI

by Avi Gesser, Robert Maddox, Friedrich Popp, and Martha Hirst


From left to right: Avi Gesser, Robert Maddox, Friedrich Popp, and Martha Hirst. (Photos courtesy of Debevoise & Plimpton LLP)

As businesses adopt Generative AI tools, they need to ensure that their governance frameworks address not only AI-specific regulations such as the forthcoming EU AI Act, but also existing regulations, including the EU and UK GDPR.

In this blog post, we outline eight questions businesses may want to ask when developing or adopting new Generative AI tools or when considering new use cases involving GDPR-covered data. At their core, they highlight the importance of integrating privacy-by-design and by-default principles into Generative AI development and use cases (see here).

If privacy is dealt with as an afterthought, it may be difficult to retrofit controls that are sufficient to mitigate privacy-related risk and ensure compliance. Accordingly, businesses may want to involve privacy representatives in any AI governance committees. In addition, businesses that are developing their own AI tools may want to consider identifying opportunities to involve privacy experts in the early stages of Generative AI development planning.

Continue reading

EU Digital Operational Resilience Act (DORA): Management Obligations and the Role of the Board

by Robert Maddox and Tristan Lockwood


From left to right: Robert Maddox and Tristan Lockwood (photos courtesy of Debevoise & Plimpton LLP)

Back in November 2022, we highlighted the enactment of the EU’s Digital Operational Resilience Act (“DORA”), which will impose far-reaching operational resilience and Board oversight requirements on almost all financial services firms regulated in the EU – including banks, insurers, payment services providers, crypto-asset custodians, and fund managers, among many others. DORA also covers critical ICT third-party service providers, which, for the first time, will be directly regulated by EU financial services regulators. In this article, we take a closer look at the obligations DORA imposes on covered entity Boards.

Continue reading

Legal Risks of Using AI Voice Analytics for Customer Service

by Avi Gesser, Johanna Skrzypczyk, Robert Maddox, Anna Gressel, Martha Hirst, and Kyle Kysela


From left to right: Avi Gesser, Johanna Skrzypczyk, Robert Maddox, Anna Gressel, Martha Hirst, and Kyle Kysela

There is a growing trend among customer-facing businesses towards using artificial intelligence (“AI”) to analyze voice data on customer calls. Companies are using these tools for various purposes, including identity verification, targeted marketing, fraud detection, cost savings, and improved customer service. For example, AI voice analytics can detect whether a customer is very upset, and therefore should be promptly connected with an experienced customer service representative, or whether the person on the phone is not really the person they purport to be. These tools can also be used to assist customer service representatives in de-escalating calls with upset customers by making real-time suggestions of phrases to use that only the customer service representative can hear, as well as to evaluate the employee’s performance in dealing with a difficult customer (e.g., did the employee raise her voice, did she manage to get the customer to stop raising his voice, etc.).

Some of the more novel and controversial uses for AI voice analytics in customer service include (1) detecting whether a customer is being dishonest, (2) inferring a customer’s race, gender, or ethnicity, and (3) assessing when certain kinds of customers with particular concerns purchase certain goods or services, and developing a corresponding targeted marketing strategy.  

Continue reading

New Automated Decision-Making Laws: Four Tips for Compliance

by Avi Gesser, Robert Maddox, Anna Gressel, Mengyi Xu, Samuel Allaman, and Andres Gutierrez

With the widespread adoption of artificial intelligence (“AI”) and other complex algorithms across industries, many business decisions that used to be made by humans are now being made (either solely or primarily) by algorithms or models. Examples of automated decision-making (“ADM”) include determining:

  • Who gets an interview, a job, a promotion, or employment discipline;
  • Which ads get displayed for a user on a website or a social media feed;
  • Whether someone’s credit application should be approved, and at what interest rate;
  • Which investments should be made;
  • When a car should brake or swerve to stay in a lane;
  • Which emails are spam and should not be read; and
  • Which transactions should be flagged or blocked as possibly fraudulent, money laundering, or in violation of sanctions regulations.
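To make the idea concrete, an automated decision like the credit example above can be as simple as a few hard-coded rules applied with no human review. The sketch below is purely illustrative and not from the post; the thresholds, field names, and rates are invented for demonstration only:

```python
# Toy illustration of automated decision-making (ADM): a credit application
# is approved or denied, and priced, entirely by code. All thresholds and
# rates here are hypothetical.

def decide_credit(income: float, debt: float, credit_score: int) -> tuple[bool, float]:
    """Return (approved, interest_rate) with no human in the loop."""
    debt_to_income = debt / income if income > 0 else float("inf")
    approved = credit_score >= 620 and debt_to_income < 0.4
    # Riskier profiles get a higher rate; real models are far more complex
    # (and raise the fairness and transparency issues the post discusses).
    rate = 0.05 if credit_score >= 740 else 0.09
    return approved, rate

print(decide_credit(income=80_000, debt=20_000, credit_score=700))  # (True, 0.09)
```

Even a trivial rule set like this can produce disparate outcomes at scale, which is what the new ADM laws are aimed at.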

Continue reading

It’s Time to Take Credential Stuffing Seriously

by Jeremy Feigelson, Avi Gesser, Norma Angelica Freeland, Marc Ponchione, Gregory T. Larkin, and Robert Maddox

We have recently written about the persistence of the three most common cyber attacks: Ransomware, Phishing, and Business Email Compromises (BECs), and the increased regulatory scrutiny that companies face when they fall victim to these attacks. Two recent developments demonstrate that credential stuffing is yet another serious cybersecurity risk that is on the rise and has the attention of regulators. First, on September 15, 2020, New York’s Attorney General, Letitia James, announced a $650,000 settlement with Dunkin’ Donuts, stemming from a 2015 security breach in which attackers used credential stuffing to target almost 20,000 customer accounts. Second, on the same day, the Securities and Exchange Commission’s Office of Compliance Inspections and Examinations (“OCIE”) issued a risk alert (the “Risk Alert”) on observed best practices by registered investment advisers and broker-dealers (together, “firms”) to protect customer accounts against credential stuffing. In this client update, we will discuss the cybersecurity and regulatory risks posed by credential stuffing and several ways to mitigate these risks.
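Credential stuffing works by replaying username/password pairs stolen from one breach against many other services, so a common detection signal is a single source attempting logins for many distinct accounts. The heuristic below is our own simplified illustration of that pattern (it is not from the Risk Alert, and the threshold is invented):

```python
# Hypothetical credential-stuffing detection heuristic: flag source IPs
# whose failed logins span an unusually large number of distinct usernames.
from collections import defaultdict


def flag_stuffing_ips(login_attempts, max_distinct_users: int = 5) -> set:
    """login_attempts: iterable of (ip, username, success) tuples.

    Returns the set of IPs whose failed attempts targeted more than
    max_distinct_users different accounts.
    """
    users_per_ip = defaultdict(set)
    for ip, username, success in login_attempts:
        if not success:
            users_per_ip[ip].add(username)
    return {ip for ip, users in users_per_ip.items() if len(users) > max_distinct_users}


# One IP sprays 20 different usernames; another fails once for one user.
attempts = [("10.0.0.1", f"user{i}", False) for i in range(20)]
attempts.append(("10.0.0.2", "alice", False))
print(flag_stuffing_ips(attempts))  # {'10.0.0.1'}
```

Real-world controls layer signals like this with rate limiting, multi-factor authentication, and checks against known-breached password lists.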

Continue reading

Schrems II – Where are we now?

As covered in our previous blog post, the CJEU has invalidated the EU-U.S. Privacy Shield for cross-border transfers of personal data from the EU to the U.S. (the “Schrems II” decision) and cast significant doubts over whether companies can continue to use the European Commission-approved Standard Contractual Clauses (“SCCs”) to transfer EU personal data to the U.S., or to other jurisdictions with similarly broad surveillance regimes.

Continue reading

Schrems II: Privacy Shield Invalid and Severe Challenges for Standard Contractual Clauses

Yesterday, the Court of Justice of the European Union (CJEU), the EU’s highest court, invalidated the EU-U.S. Privacy Shield for cross-border transfers of personal data.  The CJEU’s decision also cast significant doubts over whether companies can continue to use the European Commission-approved Standard Contractual Clauses (SCCs) to transfer EU personal data to the U.S., or to other jurisdictions with similarly broad surveillance regimes.  The CJEU’s lengthy decision is here and its short-form press release is here.

What does this mean for organizations that rely on Privacy Shield or SCCs?  History suggests that privacy enforcement authorities in the EU may hold their fire while efforts are made to come up with a replacement system for data transfers.  EU authorities hopefully will clarify their enforcement intentions soon.  In any event, organizations that have relied on Privacy Shield will have to turn immediately to considering what practical alternatives they might adopt.  U.S. government authorities will also have to turn to the knotty question of what data transfer mechanisms might ever satisfy the CJEU, given persistent EU concerns about U.S. government surveillance of personal data.

Continue reading

Preparing for and Responding to Ransomware Attacks: Thirteen Lessons from the NIST Framework and Recent Events

by Luke Dembosky, Avi Gesser, H. Jacqueline Brehmer, Robert Maddox, Dr. Friedrich Popp, and Mengyi Xu

Ransomware attacks continue to plague businesses across the globe. As companies enhance their defenses, attackers increase the sophistication of their software and its deployment. Ransomware attacks used to be limited to the locking of a company’s computer system by encryption software and a demand to pay in order to obtain the key, but not anymore.

In early June 2020, for example, the REvil ransomware group auctioned off three databases containing approximately 22,000 stolen files that were associated with a Canadian agricultural firm, for a starting price of $50,000, after the victim refused or failed to pay the ransom. This sale reflects a growing trend of ransomware attacks that includes theft of sensitive company data, along with the usual locking up of computer systems, as a means of amplifying the pressure on victim entities. As a result, companies that have operational backup systems, and therefore do not need to pay the ransom to get access to their data, may still consider paying in order to prevent the public release of their stolen confidential information.

Continue reading