Category Archives: Data Management

EU Court Upholds Commission’s Power To Demand Data Held by Foreign Companies

by Bill Batchelor, Ryan D. Junck, David A. Simon, Nicola Kerr-Shaw, Bora P. Rawcliffe, and Margot Seve

Top left to right: Bill Batchelor, Ryan D. Junck, and David A. Simon. Bottom left to right: Nicola Kerr-Shaw, Bora P. Rawcliffe, and Margot Seve (Photos courtesy of authors)

Summary

In Nuctech Warsaw (T-284/24), the EU General Court held that EU subsidiaries can lawfully be required to provide access to email accounts and data held by their overseas parent company. Key takeaways from the ruling include:

  • Broad reach of EU extraterritorial investigative powers: The order interprets the European Commission’s (EC’s) investigative powers broadly. EU law applies to conduct with significant effects in the EU, even if the conduct occurs outside the EU. Consequently, the EC may request information from non-EU companies to assess potential EU law violations.
  • Implications for other EU enforcement regimes: The investigation was carried out under the EU Foreign Subsidies Regulation (FSR), but the ruling has implications for the EC’s powers under general antitrust rules and other regulations such as the Digital Markets Act or the Digital Services Act. The judgment follows divergent rulings in the UK that limited the extraterritorial reach of UK regulators’ enforcement powers in fraud and antitrust cases. (See our February 2021 alert “English Supreme Court Limits Serious Fraud Office’s Extraterritorial Reach” for more details.)
  • Siloing access to data within a corporate organization: The ruling held that there was no evidence local subsidiaries could not access China-held data, or that compliance with the EC’s inspection decision would compel the applicants and the group to infringe Chinese law, including criminal law. Therefore, companies should consider:
    • Whether their IT environment and procedures can be siloed so that the company can demonstrate that accessing parent company data from the EU is not technically feasible without cooperation from the non-EU entities.
    • Whether laws and regulations applicable to the company would prevent it from sharing such data with an EU regulator. If so, this should be well documented in advance, potentially with validation from external legal counsel, so that any refusal to comply with a request for data can be substantiated quickly, with specific reference to the other applicable laws.

Continue reading

Dutch Data Protection Authority Imposes a Fine of 290 Million Euros on Uber

by Sarah Pearce and Ashley Webber

Left to right: Sarah Pearce and Ashley Webber (Photos courtesy of Hunton Andrews Kurth LLP)

On August 26, 2024, the Dutch Data Protection Authority (the “Dutch DPA”), as lead supervisory authority, announced that it had imposed a fine of 290 million euros ($324 million) on Uber.  The fine related to violations of the international transfer requirements under the EU General Data Protection Regulation (the “GDPR”). 

The Dutch DPA launched an investigation into Uber following complaints from more than 170 French Uber drivers to the French human rights group Ligue des droits de l’Homme, which subsequently submitted a complaint to the French Data Protection Authority (the “CNIL”).  The CNIL then forwarded the complaints to the Dutch DPA as lead supervisory authority for Uber.

Continue reading

DOD’s CMMC 2.0 Program Takes Step Forward with Release of Contract Rule Proposal

by Beth Burgin Waller and Patrick J. Austin

Beth Burgin Waller and Patrick J. Austin (photos courtesy of Woods Rogers Vandeventer Black PLC)

The United States Department of Defense (DoD) took another big step on the path to instituting its highly anticipated Cybersecurity Maturity Model Certification 2.0 program (CMMC 2.0). Once finalized, CMMC 2.0 will establish and govern cybersecurity standards for defense contractors and subcontractors.

On August 15, 2024, DoD submitted a proposed rule that would implement CMMC 2.0 in the Defense Federal Acquisition Regulation Supplement (DFARS). The proposed DFARS rule effectively supplements DoD’s proposed rule published in December 2023 by providing guidance to contracting officers, setting forth a standard contract clause to be used in all contracts covered by the CMMC 2.0 program, DFARS 252.204-7021, and setting forth a standard solicitation provision that must be used in solicitations for contracts covered by the CMMC 2.0 program, DFARS 252.204-7YYY (number to be added when the rule is finalized).

There is a 60-day comment period for the DFARS proposed rule, meaning individuals have until October 15, 2024, to provide public feedback on the proposal.

Continue reading

The EU AI Act is Officially Passed – What We Know and What’s Still Unclear

by Avi Gesser, Matt Kelly, Robert Maddox, and Martha Hirst

From left to right: Avi Gesser, Matt Kelly, Robert Maddox, and Martha Hirst. (Photos courtesy of Debevoise & Plimpton LLP)

The EU AI Act (the “Act”) has made it through the EU’s legislative process and has passed into law; it will enter into force on 1 August 2024. Most of the substantive requirements will apply two years later, from 2 August 2026, with the main exception being “Prohibited” AI systems, which will be banned from 2 February 2025.

Despite initial expectations of a sweeping and all-encompassing regulation, the final version of the Act reveals a narrower scope than some initially anticipated.

Continue reading

Treasury’s Report on AI (Part 2) – Managing AI-Specific Cybersecurity Risks in the Financial Sector

by Avi Gesser, Erez Liebermann, Matt Kelly, Jackie Dorward, and Joshua A. Goland

Top: Avi Gesser, Erez Liebermann, and Matt Kelly. Bottom: Jackie Dorward and Joshua A. Goland (Photos courtesy of Debevoise & Plimpton LLP)

This is the second post in the two-part Debevoise Data Blog series covering the U.S. Treasury Department’s report on Managing Artificial Intelligence-Specific Cybersecurity Risks in the Financial Services Sector (the “Report”).

In Part 1, we addressed the Report’s coverage of the state of AI regulation and best practices recommendations for AI risk management and governance. In Part 2, we review the Report’s assessment of AI-enhanced cybersecurity risks, as well as the risks of attacks against AI systems, and offer guidance on how financial institutions can respond to both types of risks.

Continue reading

Does California’s Delete Act Have the “DROP” on Data Brokers?: Updates and Insights from the Recent Stakeholder Session

by Christine E. Lyon, Christine Chong, Jackson Myers, and Ortal Isaac

From left to right: Christine E. Lyon, Christine Chong and Jackson Myers. (Photos courtesy of Freshfields Bruckhaus Deringer LLP)

The California Delete Act will make it easier for California consumers to request deletion of their personal information by so-called “data brokers,” a term that is much broader than companies may expect (see our prior blog post here). In particular, the Delete Act provides for a universal data deletion mechanism—known as the Data Broker Delete Requests and Opt-Out Platform, or “DROP”—that will allow any California consumer to make a single request for the deletion of their personal information by certain, or all, registered data brokers. In turn, by August 2026, data brokers will be required to regularly monitor, process, and honor deletion requests submitted through the DROP.

While the DROP’s policy objectives are fairly straightforward, it is less clear how the DROP will work in practice. For example, what measures will be taken to verify the identity of the consumer making the request, to ensure that the requesting party is the consumer they claim to be? What measures will be taken to verify that a person claiming to act as an authorized agent for a consumer actually has the right to request deletion of that consumer’s personal information? Unauthorized deletion of personal information may result in inconvenience or even loss or harm to individuals, which raises the stakes for the California Privacy Protection Agency (CPPA) as the agency responsible for building the DROP.

Continue reading

CNIL Publishes New Guidelines on the Development of AI Systems

by David Dumont and Tiago Sérgio Cabral

David Dumont and Tiago Sérgio Cabral (photos courtesy of Hunton Andrews Kurth LLP)

On June 7, 2024, following a public consultation, the French Data Protection Authority (the “CNIL”) published the final version of its guidelines addressing the development of AI systems from a data protection perspective (the “Guidelines”). Read our blog on the pre-public consultation version of these Guidelines.

In the Guidelines, the CNIL states that, in its view, the successful development of AI systems can be reconciled with the challenges of protecting privacy.

Continue reading

Incident Response Plans Are Now Accounting Controls? SEC Brings First-Ever Settled Cybersecurity Internal Controls Charges

by Andrew J. Ceresney, Charu A. Chandrasekhar, Luke Dembosky, Erez Liebermann, Benjamin R. Pedersen, Julie M. Riewe, Matt Kelly, and Anna Moody

Top left to right: Andrew J. Ceresney, Charu A. Chandrasekhar, Luke Dembosky and Erez Liebermann. Bottom left to right: Benjamin R. Pedersen, Julie M. Riewe, Matt Kelly and Anna Moody. (Photos courtesy of Debevoise & Plimpton LLP)

In an unprecedented settlement, on June 18, 2024, the U.S. Securities & Exchange Commission (the “SEC”) announced that communications and marketing provider R.R. Donnelley & Sons Co. (“RRD”) agreed to pay approximately $2.1 million to resolve charges arising out of its response to a 2021 ransomware attack. According to the SEC, RRD’s response to the attack revealed deficiencies in its cybersecurity policies and procedures and related disclosure controls. Specifically, in addition to asserting that RRD had failed to gather and review information about the incident for potential disclosure on a timely basis, the SEC alleged that RRD had failed to implement a “system of cybersecurity-related internal accounting controls” to provide reasonable assurances that access to the company’s assets—namely, its information technology systems and networks—was permitted only with management’s authorization. In particular, the SEC alleged that RRD failed to properly instruct the firm responsible for managing its cybersecurity alerts on how to prioritize such alerts, and then failed to act upon the incoming alerts from this firm.

Continue reading

US Antitrust Regulators Threaten Ephemeral Messaging Users and Their Counsel with Obstruction Charges

by Jeremy Calsyn, Nowell Bamberger, Charles P. Balaan, and Joseph M. Kay

Left to right: Jeremy Calsyn, Nowell Bamberger, Charles P. Balaan, and Joseph M. Kay (photos courtesy of Cleary Gottlieb Steen & Hamilton LLP)

In recent months, federal regulators have made statements that companies and their counsel may be subject to criminal prosecution if they fail to preserve ephemeral messaging data when they receive a subpoena or other legal process.  In January 2024, the Deputy Assistant Attorney General for Criminal Enforcement at the DOJ Antitrust Division warned that a “failure to produce” ephemeral messaging data may result in obstruction charges.[1]  Speaking at the ABA Antitrust Spring Meeting in April 2024, a lawyer for the Antitrust Division echoed that the DOJ “will not hesitate to bring obstruction charges” against company counsel and their clients if clients fail to properly retain so-called “ephemeral messages.”[2]  This is consistent with other recent warnings from the DOJ.[3]

The agencies’ focus on features of ephemeral messaging, which they argue can be used to hamper investigations, ignores the fact that ephemeral messaging applications have a legitimate role in workplaces where data security and management are paramount.  Despite the advantages of ephemeral messaging, clients should be aware of the legal and other risks presented by these applications and implement clear information retention policies that account for the organization’s duty to preserve information for litigation and government investigations.

Continue reading

Recently Enacted AI Law in Colorado: Yet Another Reason to Implement an AI Governance Program

by Avi Gesser, Erez Liebermann, Matt Kelly, Martha Hirst, Andreas Constantine Pavlou, Cameron Sharp, and Annabella M. Waszkiewicz

Top left to right: Avi Gesser, Erez Liebermann, Matt Kelly, and Martha Hirst. Bottom left to right: Andreas Constantine Pavlou, Cameron Sharp, and Annabella M. Waszkiewicz. (Photos courtesy of Debevoise & Plimpton LLP)

On May 17, 2024, Colorado passed Senate Bill 24-205 (“the Colorado AI Law” or “the Law”), a broad law regulating so-called high-risk AI systems that will become effective on February 1, 2026.  The Law imposes sweeping obligations on both AI system deployers and developers doing business in Colorado, including a duty of reasonable care to protect Colorado residents from any known or reasonably foreseeable risks of algorithmic discrimination.

Continue reading