Category Archives: Data Privacy

Navigating GDPR Risks in AI: Insights from the EDPB’s latest Opinion & the UK ICO’s AI Consultation Response

by Dr. Christoph Werkmeister, Giles Pratt, Tristan Lockwood, and Dr. Benjamin Blum

In December 2024, the European Data Protection Board (EDPB) and the UK Information Commissioner’s Office (UK ICO) separately published significant guidance on the application of the GDPR to AI.

The EDPB’s Opinion 28/2024 had been much anticipated and generated significant media coverage, with headlines such as ‘AI developers don’t need permission to scoop up data, EU data watchdogs say’ (Politico). The UK ICO’s response to its year-long consultation on privacy issues in generative AI may have attracted less attention, but it also marked a significant development in how businesses should assess AI from a privacy perspective.

Continue reading

DOJ Issues Final Rule Targeting Foreign Access to Americans’ Sensitive Data

by Michael T. Borgia and Assaf Ariely

Michael T. Borgia and Assaf Ariely (photos courtesy of Davis Wright Tremaine LLP)

The U.S. Department of Justice (DOJ) has issued a comprehensive final rule (the “Rule”) targeting foreign access to sensitive U.S. data, including Americans’ “bulk” sensitive personal data.

The Rule, which DOJ announced on December 27, 2024, prohibits and restricts U.S. persons from entering into certain transactions involving access by “countries of concern” and “covered persons” to “bulk U.S. sensitive personal data” and “government-related data.” “U.S. persons” subject to the Rule are defined broadly to include any U.S. citizen, national, or lawful permanent resident, any entity organized under the laws of the United States or any U.S. jurisdiction, and any person physically within the United States.

Continue reading

CPPA Proposed Rulemaking Package Part 1 – Cybersecurity Audits

by Avi Gesser, Matt Kelly, Johanna N. Skrzypczyk, H. Jacqueline Brehmer, Ned Terrace, Mengyi Xu, and Amer Mneimneh

Top: Avi Gesser, Matt Kelly, and Johanna N. Skrzypczyk. Bottom: H. Jacqueline Brehmer, Ned Terrace, and Mengyi Xu. (Photos courtesy of Debevoise & Plimpton LLP)

Key Takeaways

  • On November 22, 2024, the California Privacy Protection Agency (CPPA) launched a formal public comment period on its draft regulations addressing annual cybersecurity audits and other privacy obligations under the California Consumer Privacy Act (CCPA).
  • These proposed rules aim to establish robust standards for thorough and independent cybersecurity audits, delineating both procedural and substantive requirements for businesses processing personal information.
  • In this update, we provide an overview of the new cybersecurity audit provisions, including key thresholds for applicability, detailed audit expectations, and the evolving regulatory landscape shaping cybersecurity compliance.

Continue reading

Protecting Consumers’ Location Data: Key Takeaways from Four Recent Cases

by Bhavna Changrani

Photo courtesy of the author

Since the start of this year, the FTC has announced four groundbreaking cases addressing how businesses collect and, in some cases, misuse people’s location data. If your business collects, buys, sells, or uses location data, take a minute to read about the FTC’s most recent enforcement actions against data brokers and aggregators — Mobilewalla, Gravy/Venntel, InMarket, and X-Mode/Outlogic — and consider these takeaways:

Continue reading

CFPB Issues Final “Open Banking” Rule Requiring Covered Entities to Provide Consumers Access and Transferability of Financial Data

by Jarryd Anderson, Jessica S. Carey, John P. Carlin, Roberto J. Gonzalez, Brad S. Karp, and Kannon Shanmugam

Top Left to Right: Jarryd Anderson, Jessica Carey, and John Carlin. Bottom Left to Right: Roberto Gonzalez, Brad Karp, and Kannon Shanmugam. (photos courtesy of Paul Weiss)

On October 22, 2024, the Consumer Financial Protection Bureau (“CFPB” or “Bureau”) published a 594-page Notice of Final Rulemaking for its “Personal Financial Data Rights” rule, commonly known as the “Open Banking” rule, which will require covered entities—generally, providers of checking and prepaid accounts, credit cards, digital wallets, and other payment facilitators—to provide consumers and consumer-authorized third parties with access to consumers’ financial data free of charge.[1] Covered entities are required to comply with uniform standards to provide access to this financial data through consumer and developer interfaces.[2] The rule also imposes requirements on authorized third parties (such as fintechs), as well as data aggregators that facilitate access to consumers’ data, including required disclosures to consumers regarding the third parties’ use and retention of the requested data and a requirement that the data be used only in a manner reasonably necessary to provide the requested product or service (thus foreclosing selling the data or using it for targeted advertising or cross-selling purposes).[3]

Continue reading

Irish Regulator Fines LinkedIn 310 Million Euros for GDPR Violations

by David Dumont and Tiago Sérgio Cabral

Left to right: David Dumont and Tiago Sérgio Cabral (Photos courtesy of the authors)

On October 24, 2024, the Irish Data Protection Commission (the “DPC”) announced that it had issued a fine of €310 million (approx. $335 million) against LinkedIn Ireland Unlimited Company (“LinkedIn”) for breaches of the EU General Data Protection Regulation (“GDPR”) related to transparency, fairness, and lawfulness in the context of the company’s processing of its users’ personal data for behavioral analysis and targeted advertising. In addition to the fine, the DPC also issued a reprimand and an order to bring processing into compliance.  

Continue reading

Managing Cybersecurity Risks Arising from AI — New Guidance from the NYDFS

by Charu A. Chandrasekhar, Luke Dembosky, Avi Gesser, Erez Liebermann, Marshal Bozzo, Johanna Skrzypczyk, Ned Terrace, and Mengyi Xu

Top left to right: Charu A. Chandrasekhar, Luke Dembosky, Avi Gesser, and Erez Liebermann. 
Bottom left to right: Marshal Bozzo, Johanna Skrzypczyk, Ned Terrace, and Mengyi Xu. (Photos courtesy of Debevoise & Plimpton LLP)

On October 16, 2024, the New York Department of Financial Services (the “NYDFS”) issued an Industry Letter providing guidance on assessing cybersecurity risks associated with the use of AI (the “Guidance”) under the existing 23 NYCRR Part 500 (“Part 500” or “Cybersecurity Regulation”) framework. The Guidance applies to entities that are covered by Part 500 (i.e., entities with a license under the New York Banking Law, Insurance Law or Financial Services Law), but it provides valuable direction to all companies for managing the new cybersecurity risks associated with AI.

The NYDFS makes clear that the Guidance does not impose any new requirements beyond those already contained in the Cybersecurity Regulation. Instead, the Guidance is meant to explain how covered entities should use the Part 500 framework to address cybersecurity risks associated with AI and build controls to mitigate such risks. It also encourages companies to explore the potential cybersecurity benefits from integrating AI into cybersecurity tools (e.g., reviewing security logs and alerts, analyzing behavior, detecting anomalies, and predicting potential security threats). Entities that are covered by Part 500, especially those that have deployed AI in significant ways, should review the Guidance carefully, along with their current cybersecurity policies and controls, to see if any enhancements are appropriate.

Continue reading

The Changing Approach to Compliance in the Tech Sector

by Florencia Marotta-Wurgler

Photo courtesy of author

Technological innovations such as generative artificial intelligence (AI) have come under increasing scrutiny from regulators in the U.S., the European Union, and beyond. This heightened oversight aims to ensure that companies implement strong privacy, safety, and design safeguards to protect users and secure the data used in training advanced AI models. Some of these regulations have already come into effect, and others soon will. The European Union’s AI Act is expected to take effect in the second half of 2024, requiring firms to comply with obligations based on the risk level of their AI systems, including transparency, data governance, human oversight, and risk management requirements for high-risk AI applications. Within the U.S., several states have enacted laws requiring app providers to verify users’ ages and regulating AI to protect users, especially children. At the federal level, proposed legislation like the Kids Online Safety Act (KOSA) and the American Data Privacy Protection Act (ADPPA) seeks to establish national standards for youth safety, data privacy, age verification, and AI transparency on digital platforms.

For many firms, these regulatory shifts have necessitated a complete reevaluation of their compliance strategies. Meta offers a fresh example of how businesses are navigating this evolving landscape. At its “Global Innovation and Policy” event on October 16 and 17, which gathered academics, technology leaders, and policy experts, Meta executives outlined the company’s expanded compliance strategy. This strategy now extends beyond privacy concerns to tackle broader regulatory challenges, such as AI governance, youth protection, and content moderation.

Continue reading

Marriott’s Settlement with the FTC: What it Means for Businesses

by Katherine McCarron and Kamay Lafalaise

Left to Right: Katherine McCarron and Kamay Lafalaise (photos courtesy of the authors)

Marriott International, Inc. has long highlighted core values of putting people first, pursuing excellence, acting with integrity, and serving the world. The FTC and Attorneys General from 49 states and D.C. are jointly announcing an action that suggests the company may want to add a fifth value to that list: protecting customer data and privacy. 

According to a proposed complaint, Marriott International, Inc. and its subsidiary Starwood Hotels & Resorts Worldwide, LLC had data security failures that led to at least three breaches between 2014 and 2020. First, the FTC says that between 2014 and 2018, bad actors were able to take advantage of weak data security to steal 339 million consumer records from Marriott’s subsidiary, Starwood, in two separate breaches. That included millions of passport, payment card, and loyalty numbers. Then, in 2020, according to the complaint, Marriott told its customers that bad actors had breached Marriott’s own network through a franchised hotel. This time the intruders stole 5.2 million guest records, which included significant personal information and loyalty account information. The stolen information was detailed enough, the complaint explains, that bad actors could use it to create highly successful, targeted phishing campaigns to commit fraud.

Continue reading

CJEU: Competitors Can Sue over Data Protection Violations

by Dr. Detlev Gabel, Erasmus Hoffmann, and Markus Langen

Left to Right: Dr. Detlev Gabel, Erasmus Hoffmann and Markus Langen (photos courtesy of White & Case LLP)

Background

The German Federal Court of Justice (Bundesgerichtshof), tasked with resolving a conflict between two competing pharmacists, sought guidance from the Court of Justice of the European Union (“CJEU”) on interpreting the General Data Protection Regulation (“GDPR”). The defendant’s business sells over-the-counter (“OTC”) medicinal products online. During the ordering process, customers must provide certain information, including their name, delivery address, and details about the relevant OTC product. Invoking German legislation on unfair commercial practices, the claimant, a competitor, asked the German courts to order the competing pharmacy to halt this practice unless it obtains customers’ prior consent to the processing of their health-related data.

The courts at both the first and second instance determined that the ordering process involves processing of health data, which is prohibited under the GDPR in the absence of explicit customer consent or other justification. The courts found this practice to be in breach of the GDPR, and thus unfair and unlawful under the German Unfair Competition Act. The German Federal Court of Justice sought clarification on whether the GDPR allows national legislation to permit competitors to initiate legal action against a person allegedly violating the GDPR. Furthermore, it inquired if the information provided during the ordering process qualifies as health data under the GDPR, even though the relevant OTC products do not require a prescription.

In its judgment of October 4, 2024, the CJEU provided clarity on these issues.

Continue reading