Category Archives: Data Management

CFPB Issues Final “Open Banking” Rule Requiring Covered Entities to Provide Consumers Access and Transferability of Financial Data

by Jarryd Anderson, Jessica S. Carey, John P. Carlin, Roberto J. Gonzalez, Brad S. Karp, and Kannon Shanmugam

Photos of authors

Top Left to Right: Jarryd Anderson, Jessica Carey, and John Carlin. Bottom Left to Right: Roberto Gonzalez, Brad Karp, and Kannon Shanmugam. (photos courtesy of Paul Weiss)

On October 22, 2024, the Consumer Financial Protection Bureau (“CFPB” or “Bureau”) published a 594-page Notice of Final Rulemaking for its “Personal Financial Data Rights” rule, commonly known as the “Open Banking” rule, which will require covered entities—generally, providers of checking and prepaid accounts, credit cards, digital wallets, and other payment facilitators—to provide consumers and consumer-authorized third parties with access to consumers’ financial data free of charge.[1] Covered entities are required to comply with uniform standards to provide access to this financial data through consumer and developer interfaces.[2] The rule imposes requirements on authorized third parties (such as fintechs), as well as data aggregators that facilitate access to consumers’ data, including required disclosures to consumers regarding the third parties’ use and retention of the requested data and a requirement that the data only be used in a manner reasonably necessary to provide the requested product or service (thus foreclosing selling the data or using it for targeted advertising or cross selling purposes).[3]

Continue reading

Defense Department Unveils Final Rule for CMMC 2.0 Program: The Time Is Now for Defense Contractors To Get Compliant

by Beth Burgin Waller, Anthony Mazzeo, and Patrick Austin

Photos of the authors

Left to right: Beth Burgin Waller, Anthony Mazzeo, and Patrick Austin. (photos courtesy of authors)

If you work for a defense contractor or subcontractor responsible for handling controlled unclassified information (CUI) and/or federal contract information (FCI), take note: the U.S. Department of Defense has posted the final rule for the highly anticipated Cybersecurity Maturity Model Certification 2.0 program (CMMC 2.0 or the Final Rule). Issuance of the Final Rule (full text available here in PDF format) likely means DoD will begin implementing new, stringent cybersecurity standards for defense contractors at some point in early-to-mid 2025.

Continue reading

Irish Regulator Fines LinkedIn 310 Million Euros for GDPR Violations

by David Dumont and Tiago Sérgio Cabral

Photos of the authors

Left to right: David Dumont and Tiago Sérgio Cabral (Photos courtesy of the authors)

On October 24, 2024, the Irish Data Protection Commission (the “DPC”) announced that it had issued a fine of €310 million (approx. $335 million) against LinkedIn Ireland Unlimited Company (“LinkedIn”) for breaches of the EU General Data Protection Regulation (“GDPR”) related to transparency, fairness, and lawfulness in the context of the company’s processing of its users’ personal data for behavioral analysis and targeted advertising. In addition to the fine, the DPC issued a reprimand and an order to bring processing into compliance.

Continue reading

Managing Cybersecurity Risks Arising from AI — New Guidance from the NYDFS

by Charu A. Chandrasekhar, Luke Dembosky, Avi Gesser, Erez Liebermann, Marshal Bozzo, Johanna Skrzypczyk, Ned Terrace, and Mengyi Xu

Photos of the authors

Top left to right: Charu A. Chandrasekhar, Luke Dembosky, Avi Gesser, and Erez Liebermann. 
Bottom left to right: Marshal Bozzo, Johanna Skrzypczyk, Ned Terrace, and Mengyi Xu. (Photos courtesy of Debevoise & Plimpton LLP)

On October 16, 2024, the New York Department of Financial Services (the “NYDFS”) issued an Industry Letter providing guidance on assessing cybersecurity risks associated with the use of AI (the “Guidance”) under the existing 23 NYCRR Part 500 (“Part 500” or “Cybersecurity Regulation”) framework. The Guidance applies to entities that are covered by Part 500 (i.e., entities with a license under the New York Banking Law, Insurance Law, or Financial Services Law), but it provides valuable direction to all companies for managing the new cybersecurity risks associated with AI.

The NYDFS makes clear that the Guidance does not impose any new requirements beyond those already contained in the Cybersecurity Regulation. Instead, the Guidance is meant to explain how covered entities should use the Part 500 framework to address cybersecurity risks associated with AI and build controls to mitigate such risks. It also encourages companies to explore the potential cybersecurity benefits from integrating AI into cybersecurity tools (e.g., reviewing security logs and alerts, analyzing behavior, detecting anomalies, and predicting potential security threats). Entities that are covered by Part 500, especially those that have deployed AI in significant ways, should review the Guidance carefully, along with their current cybersecurity policies and controls, to see if any enhancements are appropriate.

Continue reading

From Art to Science: Unveiling the Transformative Power of AI in Surveys and Interviews

by Madison Leonard, Michael Costa, and Jonny Frank

Left to right: Madison Leonard, Michael Costa and Jonny Frank. (Photos courtesy of StoneTurn)

Far from mere tools, employee surveys and interviews serve as key indicators of an organization’s overall health and success. They play a pivotal role in assessing corporate culture, gauging satisfaction, gathering feedback, improving retention, and measuring diversity. Surveys can quantify sentiment—often over multiple periods and population segments. Interviews supplement surveys by offering nuance, context, the opportunity to follow up and even multiple perspectives on a single topic. These qualitative responses are essential for identifying subtle concerns, uncovering issues not explicitly covered by standard questions, and understanding employee sentiment. Without these insights, assessments risk missing critical information about how employees feel about governance, compliance and other key topics.

Despite their importance, surveys and interviews come with their own set of challenges. They are resource-intensive and susceptible to human error. The process of choosing survey topics and questions is a delicate one. While quantitative survey results are clear-cut, the qualitative feedback from interviews and focus groups, often in the form of lengthy, nuanced responses, demands significant mental effort to categorize and interpret. The larger the dataset, the higher the risk of confirmation bias, where reviewers may focus on responses that align with their expectations or miss critical patterns in the data. This bias, coupled with the sheer volume of information to be processed, makes it difficult to conduct thorough and objective assessments, especially under time and budget constraints.

Artificial Intelligence (AI), specifically Large Language Models (LLMs), presents a compelling solution to these challenges. By automating both quantitative and qualitative data analysis, AI models can swiftly sift through large datasets, identifying patterns, contradictions, and sentiment across thousands of responses in a fraction of the time it would take a human reviewer. This time-saving aspect of AI is particularly beneficial in today’s fast-paced business environment, where efficiency and productivity are paramount.

In addition to speed, AI helps mitigate the risk of human error and bias. It allows professionals to review qualitative data thoroughly and impartially. Unlike the human eye, a well-trained AI model can efficiently process voluminous and complex text, uncovering insights without overlooking critical information.
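To make the idea of automated pattern-surfacing concrete, here is a minimal toy sketch in Python. A real workflow would use an LLM to categorize responses and score sentiment; simple keyword matching stands in for the model here, and all responses and theme keywords are invented for the example.

```python
from collections import Counter

# Hypothetical theme taxonomy; in practice an LLM would assign themes.
THEMES = {
    "workload": {"overworked", "hours", "burnout", "deadlines"},
    "management": {"manager", "leadership", "communication"},
    "compliance": {"policy", "training", "reporting", "ethics"},
}

def tag_themes(responses):
    """Count how many free-text responses touch each theme."""
    counts = Counter()
    for text in responses:
        words = set(text.lower().replace(".", "").split())
        for theme, keywords in THEMES.items():
            if words & keywords:  # any keyword overlap counts as a hit
                counts[theme] += 1
    return counts

responses = [
    "Long hours and constant deadlines are causing burnout.",
    "Leadership communication has improved this year.",
    "Ethics training felt rushed and the reporting policy is unclear.",
]
print(tag_themes(responses))
```

The point of the sketch is the shape of the pipeline, not the matching logic: every response is scored against every theme the same way, which is what removes the reviewer-by-reviewer variability described above.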

But, as powerful as AI may be, human judgment and experience remain critical. Human input is necessary to refine, train and ultimately verify the accuracy of an AI model.

Continue reading

The Changing Approach in Compliance in the Tech Sector

by Florencia Marotta-Wurgler

Photo of author

Photo courtesy of author

Technological innovations such as generative artificial intelligence (AI) have come under increasing scrutiny from regulators in the U.S., the European Union, and beyond. This heightened oversight aims to ensure that companies implement strong privacy, safety, and design safeguards to protect users, and secure the data used in training advanced AI models. Some of these regulations have already or will soon come into effect. The European Union’s AI Act is expected to take effect in the second half of 2024, requiring firms to comply with regulations based on the risk level of their AI systems, including obligations for transparency, data governance, human oversight, and risk management for high-risk AI applications. Within the U.S., several states have enacted laws requiring app providers to verify users’ ages and regulate AI to protect users, especially children. At the federal level, proposed legislation like the Kids Online Safety Act (KOSA) and the American Data Privacy Protection Act (ADPPA) seeks to establish national standards for youth safety, data privacy, age verification, and AI transparency on digital platforms.

For many firms, these regulatory shifts have necessitated a complete reevaluation of their compliance strategies. Meta offers a fresh example of how businesses are navigating this evolving landscape. At its “Global Innovation and Policy” event on October 16 and 17, which gathered academics, technology leaders, and policy experts, Meta executives outlined the company’s expanded compliance strategy. This strategy now extends beyond privacy concerns to tackle broader regulatory challenges, such as AI governance, youth protection, and content moderation.

Continue reading

Marriott’s Settlement with the FTC: What it Means for Businesses

by Katherine McCarron and Kamay Lafalaise

Photos of authors

Left to Right: Katherine McCarron and Kamay Lafalaise (photos courtesy of the authors)

Marriott International, Inc. has long highlighted core values of putting people first, pursuing excellence, acting with integrity, and serving the world. The FTC and Attorneys General from 49 states and D.C. are jointly announcing an action that suggests the company may want to add a fifth value to that list: protecting customer data and privacy. 

According to a proposed complaint, Marriott International, Inc. and its subsidiary Starwood Hotels & Resorts Worldwide, LLC had data security failures that led to at least three breaches between 2014 and 2020. First, the FTC says between 2014 and 2018 bad actors were able to take advantage of weak data security to steal 339 million consumer records from Marriott’s subsidiary, Starwood, in two separate breaches. That included millions of passport, payment card, and loyalty numbers. Then, in 2020, according to the complaint, Marriott told its customers bad actors had breached Marriott’s own network through a franchised hotel.  This time the intruders stole 5.2 million guest records, which included significant personal information and loyalty account information. The stolen information was detailed enough, the complaint explains, that bad actors could use it to create highly successful, targeted phishing campaigns to commit fraud.

Continue reading

CJEU: Competitors Can Sue over Data Protection Violations

by Dr. Detlev Gabel, Erasmus Hoffmann, and Markus Langen

Photos of authors

Left to Right: Dr. Detlev Gabel, Erasmus Hoffmann and Markus Langen (photos courtesy of White & Case LLP)

Background

The German Federal Court of Justice (Bundesgerichtshof), tasked with resolving a conflict between two competing pharmacists, sought guidance from the Court of Justice of the European Union (“CJEU”) on interpreting the General Data Protection Regulation (“GDPR”). The defendant’s business sells over-the-counter (“OTC”) medicinal products online. During the ordering process, customers must provide certain information, including their name, delivery address, and details about the relevant OTC product. Invoking German legislation on unfair commercial practices, the claimant, a competitor, asked the German courts to order the competing pharmacy to halt this practice unless it ensures that customers give prior consent to the processing of their health-related data.

The courts at both the first and second instance determined that the ordering process involves processing of health data, which is prohibited under the GDPR in the absence of explicit customer consent or other justification. The courts found this practice to be in breach of the GDPR, and thus unfair and unlawful under the German Unfair Competition Act. The German Federal Court of Justice sought clarification on whether the GDPR allows national legislation to permit competitors to initiate legal action against a person allegedly violating the GDPR. Furthermore, it inquired if the information provided during the ordering process qualifies as health data under the GDPR, even though the relevant OTC products do not require a prescription.

In its judgement of October 4, 2024, the CJEU provided clarity on these issues.

Continue reading

ICO Dawn Raids: How to Respond and What You Can Do to Prepare – An FAQ

by Robert Maddox and Aisling Cowell

Left to Right: Robert Maddox and Aisling Cowell (photos courtesy of Debevoise & Plimpton LLP)

In the UK, unannounced inspections of businesses’ premises, or “dawn raids”, are most often associated with authorities such as the Serious Fraud Office, National Crime Agency, Competition and Markets Authority and Metropolitan Police. However, data controllers and processors should be aware that the UK’s Information Commissioner’s Office (“ICO”) can also carry out dawn raids as part of investigations into compliance with data protection laws.

Such inspections can be stressful and complex for businesses to respond to, with a risk of criminal liability for failing to cooperate properly.

Here, we examine the ICO’s powers to conduct dawn raids, how those powers have been exercised in the past, and outline the steps which businesses should consider taking to prepare effectively for – and appropriately respond to – dawn raids.

Continue reading

T-Mobile to Spend 31.5 Million Dollars to Settle Multiple FCC Investigations Related to Recent Data Breaches

by Lisa Sotto and Jennie Cunningham

Photos of the speakers

Left to right: Lisa Sotto and Jennie Cunningham. (Photos courtesy of Hunton Andrews Kurth LLP)

On September 30, 2024, the Federal Communications Commission announced that T-Mobile has entered into an agreement to settle multiple data protection and cybersecurity investigations stemming from data breaches in 2021, 2022 and 2023. The breaches involved the personal information of millions of current, former, and prospective T-Mobile customers and end-user customers of T-Mobile wireless network operators, and resulted from various threat vectors, including a 2021 cyberattack, a 2022 platform access incident, a 2023 sales application incident, and a 2023 API incident. T-Mobile previously settled class action claims in federal district court related to the 2021 cyberattack. In addition to a $15.75 million penalty, T-Mobile also will be required to spend $15.75 million over the next two years to strengthen its cybersecurity program and implement a plan to protect consumers from similar future breaches.

Continue reading