Tag Archives: Johanna N. Skrzypczyk

CPPA Proposed Rulemaking Package Part 1 – Cybersecurity Audits

by Avi Gesser, Matt Kelly, Johanna N. Skrzypczyk, H. Jacqueline Brehmer, Ned Terrace, Mengyi Xu, and Amer Mneimneh


Top: Avi Gesser, Matt Kelly, and Johanna N. Skrzypczyk. Bottom: H. Jacqueline Brehmer, Ned Terrace, and Mengyi Xu. (Photos courtesy of Debevoise & Plimpton LLP)

Key Takeaways

  • On November 22, 2024, the California Privacy Protection Agency (CPPA) launched a formal public comment period on its draft regulations addressing annual cybersecurity audits and other privacy obligations under the California Consumer Privacy Act (CCPA).
  • These proposed rules aim to establish robust standards for thorough and independent cybersecurity audits, delineating both procedural and substantive requirements for businesses processing personal information.
  • In this update, we provide an overview of the new cybersecurity audit provisions, including key thresholds for applicability, detailed audit expectations, and the evolving regulatory landscape shaping cybersecurity compliance.

Continue reading

The Arrival of 2023 U.S. State Privacy Laws – Part 1: California Update

by Avi Gesser, Johanna Skrzypczyk, Michael R. Roberts, and Alessandra G. Masciandaro


From left to right: Avi Gesser, Johanna Skrzypczyk, Michael R. Roberts, and Alessandra G. Masciandaro

2023 has arrived, and with it comes a novel patchwork of privacy requirements arising out of comprehensive state privacy laws that have been adopted (or amended) by legislatures in California, Virginia, Colorado, Connecticut and Utah. Although privacy practitioners have been busy analyzing these laws and assisting clients with compliance efforts, rulemaking in California and Colorado has made this a moving target. We’ve previously blogged about how companies can prepare for these laws, and how enforcement and guidance under the GDPR might shed light on how some of these laws will be applied. In this series of posts, we will track key rulemaking developments as well as trends in compliance efforts, with practical takeaways for covered companies to consider as these laws, and the regulatory expectations around them, mature.

Continue reading

Legal Risks of Using AI Voice Analytics for Customer Service

by Avi Gesser, Johanna Skrzypczyk, Robert Maddox, Anna Gressel, Martha Hirst, and Kyle Kysela


From left to right: Avi Gesser, Johanna Skrzypczyk, Robert Maddox, Anna Gressel, Martha Hirst, and Kyle Kysela

There is a growing trend among customer-facing businesses towards using artificial intelligence (“AI”) to analyze voice data on customer calls. Companies are using these tools for various purposes, including identity verification, targeted marketing, fraud detection, cost savings, and improved customer service. For example, AI voice analytics can detect whether a customer is very upset, and therefore should be promptly connected with an experienced customer service representative, or whether the person on the phone is not really the person they purport to be. These tools can also be used to assist customer service representatives in de-escalating calls with upset customers by making real-time suggestions of phrases to use that only the customer service representative can hear, as well as to evaluate the employee’s performance in dealing with a difficult customer (e.g., did the employee raise her voice, did she manage to get the customer to stop raising his voice, etc.).

Some of the more novel and controversial uses of AI voice analytics in customer service include (1) detecting whether a customer is being dishonest, (2) inferring a customer’s race, gender, or ethnicity, and (3) identifying when customers with particular concerns purchase certain goods or services, and developing corresponding targeted marketing strategies.

Continue reading

California’s Age-Appropriate Design Code Act Expands Businesses’ Privacy Obligations Regarding Minors

by Avi Gesser, Johanna N. Skrzypczyk, Michael R. Roberts, Michael J. Bloom, Martha Hirst, and Alessandra G. Masciandaro

On September 15, 2022, California Governor Gavin Newsom signed into law the bipartisan AB 2273, known as the California Age-Appropriate Design Code Act (“California Design Code”). The California Design Code aims to protect children online by imposing heightened obligations on any business that provides an online product, service, or feature “likely to be accessed by children.” Governor Newsom stated that he is “thankful to Assemblymembers Wicks and Cunningham and the tech industry for pushing these protections and putting the wellbeing of our kids first.”  The California Design Code’s business obligations take effect on July 1, 2024, though certain businesses must complete Data Protection Impact Assessments “on or before” that date.

In this post, we outline the California Design Code and its compliance requirements, compare it to pre-existing privacy regimes, and conclude with key takeaways for businesses to keep in mind as they adapt to the ever-changing privacy landscape.

Continue reading

Getting Ready for 2023: What Companies Can Do Now to Prepare for New Privacy Laws

by Jeremy Feigelson, Avi Gesser, Johanna Skrzypczyk, Michael Bloom, Michael R. Roberts, Tricia Reville, and Kate Saba

The Virginia Consumer Data Protection Act (“VCDPA”) and amendments to the California Consumer Privacy Act (“CCPA”)—enshrined in the California Privacy Rights Act (“CPRA”)—take effect on January 1, 2023.  In addition, the Colorado Privacy Act (“ColoPA”) takes effect on July 1, 2023.  These developments have companies understandably concerned about complying with a patchwork of state laws.

How can companies prepare?

Continue reading

Banking Regulators Finalize 36-Hour Data Breach Notification Rule

by Luke Dembosky, Avi Gesser, Satish Kini, Gregory Lyons, Johanna Skrzypczyk, Christopher Ford, Alex Mogul, and Erik Rubinstein

On November 18, 2021, federal banking regulators published a Final Rule that imposes new notification requirements on banking organizations for certain cybersecurity incidents.

Most significantly, the Final Rule requires that banking organizations notify their primary federal regulator within 36 hours after experiencing a material or potentially material cybersecurity event.

The Final Rule will go into effect on April 1, 2022, with a required compliance date of May 1, 2022.

The regulators – the Federal Deposit Insurance Corporation (“FDIC”), the Office of the Comptroller of the Currency (“OCC”) and the Federal Reserve Board (“FRB”) (together the “Agencies”) – first published a proposed rule about ten months ago, which we covered on the Data Blog. Much of the proposed rule was carried over into the Final Rule, but there are a few key differences, which we identify below.

Continue reading

The FTC’s Strengthened Safeguards Rule and the Evolving Landscape of Reasonable Data Security

by Jeremy Feigelson, Avi Gesser, Satish Kini, Johanna Skrzypczyk, Lily D. Vo, Corey Goldstein, and Scott M. Caravello

On October 27, 2021, the Federal Trade Commission (the “FTC”) announced significant updates to the Standards for Safeguarding Customer Information (the “Safeguards Rule” or “Amended Rule”).  This rule, promulgated pursuant to the Gramm-Leach-Bliley Act, is designed to protect the consumer data collected by non-bank financial institutions, such as mortgage lenders and brokers, “pay day” lenders, and automobile dealerships, among many others (“subject financial institutions”).  The Amended Rule is likely to have a far-reaching ripple effect and inform the meaning of reasonable data security requirements industry-wide.  In this blog post, we highlight the Amended Rule’s more novel requirements and provide an overview of the potential impacts.

Continue reading

Face Forward: Strategies for Complying with Facial Recognition Laws (Part II of II)

by Jeremy Feigelson, Avi Gesser, Anna Gressel, Andy Gutierrez, and Johanna Skrzypczyk

This is Part 2 in a two-part series of articles about facial recognition laws in the United States. In Part 1, we discussed how current legislation addresses facial recognition. In this part, we assess where the laws seem to be heading and offer some practical risk reduction strategies.

Continue reading

Face Forward: Strategies for Complying with Facial Recognition Laws (Part I of II)

by Jeremy Feigelson, Avi Gesser, Anna Gressel, Andy Gutierrez, and Johanna Skrzypczyk

This is Part I of a two-part post. 

Two huge cross-currents are sweeping the world of facial recognition—and head-on into each other. Companies are eagerly adopting facial recognition tools to better serve their customers, reduce their fraud risks, and manage their workforces. Meanwhile, legislatures and privacy advocates are pushing back hard. They challenge facial recognition as inherently overreaching, invasive of privacy, and prone to error and bias. Legal restrictions of different kinds have been enacted around the country, with more seemingly certain to come.

How will the tension sort itself out between new use cases on the one hand and the push for legal restrictions on the other – and when? And what’s a company to do right now, with facial recognition opportunities presenting themselves today while the law remains a moving target?

This two-part series aims to help. In this Part 1, we lay out the current laws governing facial recognition in the United States. In Part 2, we assess where the law is headed and offer some practical risk-reduction strategies.

Continue reading

The Supreme Court TransUnion Case: Part 2—What It Means for Efforts to Defeat Class Certification

by Mark P. Goodman, Maura Kathleen Monaghan, Jim Pastore, Jacob W. Stahl, and Adam Aukland-Peck

This is Part 2 of a two-part article on the recent U.S. Supreme Court TransUnion decision.  In Part 1, we discussed the implications of the decision for standing in cyber cases.

Continue reading