Category Archives: Data Privacy

EU Court Upholds Commission’s Power To Demand Data Held by Foreign Companies

by Bill Batchelor, Ryan D. Junck, David A. Simon, Nicola Kerr-Shaw, Bora P. Rawcliffe, and Margot Seve

Top left to right: Bill Batchelor, Ryan D. Junck, and David A. Simon. Bottom left to right: Nicola Kerr-Shaw, Bora P. Rawcliffe, and Margot Seve (Photos courtesy of authors)

Summary

In Nuctech Warsaw (T-284/24), the EU Court of Justice held that EU subsidiaries can lawfully be required to provide access to email accounts and data held by their overseas parent company. Key takeaways from the ruling include:

  • Broad reach of EU extraterritorial investigative powers: The order interprets the European Commission’s (EC’s) investigative powers broadly. EU law applies to conduct with significant effects in the EU, even if the conduct occurs outside the EU. Consequently, the EC may request information from non-EU companies to assess potential EU law violations.
  • Implications for other EU enforcement regimes: Although the investigation was carried out under the EU Foreign Subsidies Regulation (FSR), the ruling has implications for the EC’s powers under general antitrust rules and other regimes such as the Digital Markets Act and the Digital Services Act. The judgment diverges from rulings in the UK that limited the extraterritorial reach of UK regulators’ enforcement powers in fraud and antitrust cases. (See our February 2021 alert “English Supreme Court Limits Serious Fraud Office’s Extraterritorial Reach” for more details.)
  • Siloing access to data within a corporate organization: The ruling held that there was no evidence that local subsidiaries could not access China-held data, or that compliance with the EC’s inspection decision would compel the applicants and the group to infringe Chinese law, including criminal law. Therefore, companies should consider:
    • Whether their IT environment and procedures can be siloed so that the company can demonstrate that accessing parent company data from the EU is not technically feasible without cooperation from the non-EU entities.
    • Whether law and regulation applicable to a company would prevent it from sharing this data with an EU regulator. If so, this should be well-documented in advance, potentially with external legal counsel validation, so that any refusal to comply with a request for data could be quickly substantiated with specific reference to other applicable laws.

Dutch Data Protection Authority Imposes a Fine of 290 Million Euros on Uber

by Sarah Pearce and Ashley Webber

Left to right: Sarah Pearce and Ashley Webber (Photos courtesy of Hunton Andrews Kurth LLP)

On August 26, 2024, the Dutch Data Protection Authority (the “Dutch DPA”), as lead supervisory authority, announced that it had imposed a fine of 290 million euros ($324 million) on Uber.  The fine related to violations of the international transfer requirements under the EU General Data Protection Regulation (the “GDPR”). 

The Dutch DPA launched an investigation into Uber following complaints from more than 170 French Uber drivers to the French human rights interest group the Ligue des droits de l’Homme, which subsequently submitted a complaint to the French Data Protection Authority (the “CNIL”).  The CNIL then forwarded the complaints to the Dutch DPA as lead supervisory authority for Uber.

DOD’s CMMC 2.0 Program Takes Step Forward with Release of Contract Rule Proposal

by Beth Burgin Waller and Patrick J. Austin

Beth Burgin Waller and Patrick J. Austin (photos courtesy of Woods Rogers Vandeventer Black PLC)

The United States Department of Defense (DoD) took another big step on the path to instituting its highly anticipated Cybersecurity Maturity Model Certification 2.0 program (CMMC 2.0). Once finalized, CMMC 2.0 will establish and govern cybersecurity standards for defense contractors and subcontractors.

On August 15, 2024, DoD submitted a proposed rule that would implement CMMC 2.0 in the Defense Federal Acquisition Regulation Supplement (DFARS). The proposed DFARS rule effectively supplements DoD’s proposed rule published in December 2023 by providing guidance to contracting officers, setting forth a standard contract clause to be used in all contracts covered by the CMMC 2.0 program (DFARS 252.204-7021), and setting forth a standard solicitation provision that must be used in solicitations for contracts covered by the CMMC 2.0 program (DFARS 252.204-7YYY, number to be added when the rule is finalized).

There is a 60-day comment period for the DFARS proposed rule, meaning individuals have until October 15, 2024, to provide public feedback on the proposal.

Treasury’s Report on AI (Part 2) – Managing AI-Specific Cybersecurity Risks in the Financial Sector

by Avi Gesser, Erez Liebermann, Matt Kelly, Jackie Dorward, and Joshua A. Goland

Top: Avi Gesser, Erez Liebermann, and Matt Kelly. Bottom: Jackie Dorward and Joshua A. Goland (Photos courtesy of Debevoise & Plimpton LLP)

This is the second post in the two-part Debevoise Data Blog series covering the U.S. Treasury Department’s report on Managing Artificial Intelligence-Specific Cybersecurity Risks in the Financial Services Sector (the “Report”).

In Part 1, we addressed the Report’s coverage of the state of AI regulation and best practices recommendations for AI risk management and governance. In Part 2, we review the Report’s assessment of AI-enhanced cybersecurity risks, as well as the risks of attacks against AI systems, and offer guidance on how financial institutions can respond to both types of risks.

Does California’s Delete Act Have the “DROP” on Data Brokers?: Updates and Insights from the Recent Stakeholder Session

by Christine E. Lyon, Christine Chong, Jackson Myers, and Ortal Isaac

From left to right: Christine E. Lyon, Christine Chong, and Jackson Myers. (Photos courtesy of Freshfields Bruckhaus Deringer LLP)

The California Delete Act will make it easier for California consumers to request deletion of their personal information by so-called “data brokers,” a term that is much broader than companies may expect (see our prior blog post here). In particular, the Delete Act provides for a universal data deletion mechanism—known as the Data Broker Delete Requests and Opt-Out Platform, or “DROP”—that will allow any California consumer to make a single request for the deletion of their personal information by certain, or all, registered data brokers. In turn, by August 2026, data brokers will be required to regularly monitor, process, and honor deletion requests submitted through the DROP.

While the DROP’s policy objectives are fairly straightforward, it is less clear how the DROP will work in practice. For example, what measures will be taken to verify the identity of the consumer making the request, to ensure that the requesting party is the consumer they claim to be? What measures will be taken to verify that a person claiming to act as an authorized agent for a consumer actually has the right to request deletion of that consumer’s personal information? Unauthorized deletion of personal information may result in inconvenience or even loss or harm to individuals, which raises the stakes for the California Privacy Protection Agency (CPPA) as the agency responsible for building the DROP.

CNIL Publishes New Guidelines on the Development of AI Systems

by David Dumont and Tiago Sérgio Cabral

David Dumont and Tiago Sérgio Cabral (photos courtesy of Hunton Andrews Kurth LLP)

On June 7, 2024, following a public consultation, the French Data Protection Authority (the “CNIL”) published the final version of its guidelines addressing the development of AI systems from a data protection perspective (the “Guidelines”). Read our blog on the pre-public consultation version of these Guidelines.

In the Guidelines, the CNIL states that, in its view, the successful development of AI systems can be reconciled with the challenges of protecting privacy.

Incident Response Plans Are Now Accounting Controls? SEC Brings First-Ever Settled Cybersecurity Internal Controls Charges

by Andrew J. Ceresney, Charu A. Chandrasekhar, Luke Dembosky, Erez Liebermann, Benjamin R. Pedersen, Julie M. Riewe, Matt Kelly, and Anna Moody

Top left to right: Andrew J. Ceresney, Charu A. Chandrasekhar, Luke Dembosky and Erez Liebermann. Bottom left to right: Benjamin R. Pedersen, Julie M. Riewe, Matt Kelly and Anna Moody. (Photos courtesy of Debevoise & Plimpton LLP)

In an unprecedented settlement, on June 18, 2024, the U.S. Securities & Exchange Commission (the “SEC”) announced that communications and marketing provider R.R. Donnelley & Sons Co. (“RRD”) agreed to pay approximately $2.1 million to resolve charges arising out of its response to a 2021 ransomware attack. According to the SEC, RRD’s response to the attack revealed deficiencies in its cybersecurity policies and procedures and related disclosure controls. Specifically, in addition to asserting that RRD had failed to gather and review information about the incident for potential disclosure on a timely basis, the SEC alleged that RRD had failed to implement a “system of cybersecurity-related internal accounting controls” to provide reasonable assurances that access to the company’s assets—namely, its information technology systems and networks—was permitted only with management’s authorization. In particular, the SEC alleged that RRD failed to properly instruct the firm responsible for managing its cybersecurity alerts on how to prioritize such alerts, and then failed to act upon the incoming alerts from this firm.

Treasury and FSOC Sharpen Focus on Risks of AI in the Financial Sector

by Alison M. Hashmall, David Sewell, Beth George, Andrew Dockham, Megan M. Kayo and Nathaniel Balk

Top left to right: Alison M. Hashmall, David Sewell and Beth George. Bottom left to right: Andrew Dockham, Megan M. Kayo and Nathaniel Balk. (Photos courtesy of Freshfields Bruckhaus Deringer LLP)

On June 6-7, 2024, the Financial Stability Oversight Council (FSOC or the Council) cosponsored a conference on AI and financial stability with the Brookings Institution (the FSOC Conference). The conference was billed as “an opportunity for the public and private sectors to convene to discuss potential systemic risks posed by AI in financial services, to explore the balance between encouraging innovation and mitigating risks, and to share insights on effective oversight of AI-related risks to financial stability.” The FSOC Conference featured noteworthy speeches by Secretary of the Treasury Janet Yellen (who chairs the Council) and Acting Comptroller of the Currency Michael Hsu. In a further sign of increased regulatory focus on AI in the financial industry, the Treasury Department also released during the conference a request for information on the Uses, Opportunities, and Risks of Artificial Intelligence (AI) in the Financial Services Sector (the AI RFI) – its most recent, and most comprehensive, effort to understand how AI is being used in the financial industry.

In this blog post, we first summarize the key questions raised and topics addressed in the AI RFI.  We then summarize the key takeaways from FSOC’s conference on AI and discuss how these developments fit within the broader context of actions taken by the federal financial regulators in the AI space. Lastly, we lay out takeaways and the path ahead for financial institutions as they continue to navigate the rapid development of AI technology.

Recently Enacted AI Law in Colorado: Yet Another Reason to Implement an AI Governance Program

by Avi Gesser, Erez Liebermann, Matt Kelly, Martha Hirst, Andreas Constantine Pavlou, Cameron Sharp, and Annabella M. Waszkiewicz

Top left to right: Avi Gesser, Erez Liebermann, Matt Kelly, and Martha Hirst. Bottom left to right: Andreas Constantine Pavlou, Cameron Sharp, and Annabella M. Waszkiewicz. (Photos courtesy of Debevoise & Plimpton LLP)

On May 17, 2024, Colorado passed Senate Bill 24-205 (“the Colorado AI Law” or “the Law”), a broad law regulating so-called high-risk AI systems that will become effective on February 1, 2026.  The law imposes sweeping obligations on both AI system deployers and developers doing business in Colorado, including a duty of reasonable care to protect Colorado residents from any known or reasonably foreseeable risks of algorithmic discrimination.

Land of 10,000 Data Lakes: Minnesota Consumer Data Privacy Act Signed into Law

by Nancy Libin, John D. Seiver, and Jevan Hutson

From left to right: Nancy Libin, John D. Seiver, and Jevan Hutson. (Photos courtesy of Davis Wright Tremaine LLP)

On May 25, 2024, Minnesota Governor Tim Walz signed the Minnesota Consumer Data Privacy Act (the “Act”), which takes effect on July 31, 2025, for most controllers and on July 31, 2029, for certain postsecondary educational institutions. Minnesota is the 18th state to enact a comprehensive consumer data privacy law.

The Act adopts the same framework as most other state privacy laws but includes several novel provisions, including broader rights for Minnesota residents who are subject to profiling in furtherance of decisions that produce legal or similarly significant effects.

We highlight key aspects of the Act below.
