Tag Archives: Matt Kelly

CPPA Proposed Rulemaking Package Part 1 – Cybersecurity Audits

by Avi Gesser, Matt Kelly, Johanna N. Skrzypczyk, H. Jacqueline Brehmer, Ned Terrace, Mengyi Xu, and Amer Mneimneh

Top: Avi Gesser, Matt Kelly, and Johanna N. Skrzypczyk. Bottom: H. Jacqueline Brehmer, Ned Terrace, and Mengyi Xu. (Photos courtesy of Debevoise & Plimpton LLP)

Key Takeaways

  • On November 22, 2024, the California Privacy Protection Agency (CPPA) launched a formal public comment period on its draft regulations addressing annual cybersecurity audits and other privacy obligations under the California Consumer Privacy Act (CCPA).
  • These proposed rules aim to establish robust standards for thorough and independent cybersecurity audits, delineating both procedural and substantive requirements for businesses processing personal information.
  • In this update, we provide an overview of the new cybersecurity audit provisions, including key thresholds for applicability, detailed audit expectations, and the evolving regulatory landscape shaping cybersecurity compliance.

The EU AI Act is Officially Passed – What We Know and What’s Still Unclear

by Avi Gesser, Matt Kelly, Robert Maddox, and Martha Hirst

From left to right: Avi Gesser, Matt Kelly, Robert Maddox, and Martha Hirst. (Photos courtesy of Debevoise & Plimpton LLP)

The EU AI Act (the “Act”) has completed the EU’s legislative process and passed into law; it will enter into force on 1 August 2024. Most of the substantive requirements will apply two years later, from 2 August 2026, with the main exception being “Prohibited” AI systems, which will be banned from 2 February 2025.

Despite initial expectations of a sweeping and all-encompassing regulation, the final version of the Act reveals a narrower scope than some initially anticipated.

Treasury’s Report on AI (Part 2) – Managing AI-Specific Cybersecurity Risks in the Financial Sector

by Avi Gesser, Erez Liebermann, Matt Kelly, Jackie Dorward, and Joshua A. Goland

Top: Avi Gesser, Erez Liebermann, and Matt Kelly. Bottom: Jackie Dorward and Joshua A. Goland. (Photos courtesy of Debevoise & Plimpton LLP)

This is the second post in the two-part Debevoise Data Blog series covering the U.S. Treasury Department’s report on Managing Artificial Intelligence-Specific Cybersecurity Risks in the Financial Services Sector (the “Report”).

In Part 1, we addressed the Report’s coverage of the state of AI regulation and best practices recommendations for AI risk management and governance. In Part 2, we review the Report’s assessment of AI-enhanced cybersecurity risks, as well as the risks of attacks against AI systems, and offer guidance on how financial institutions can respond to both types of risks.

Incident Response Plans Are Now Accounting Controls? SEC Brings First-Ever Settled Cybersecurity Internal Controls Charges

by Andrew J. Ceresney, Charu A. Chandrasekhar, Luke Dembosky, Erez Liebermann, Benjamin R. Pedersen, Julie M. Riewe, Matt Kelly, and Anna Moody

Top left to right: Andrew J. Ceresney, Charu A. Chandrasekhar, Luke Dembosky, and Erez Liebermann. Bottom left to right: Benjamin R. Pedersen, Julie M. Riewe, Matt Kelly, and Anna Moody. (Photos courtesy of Debevoise & Plimpton LLP)

In an unprecedented settlement, on June 18, 2024, the U.S. Securities and Exchange Commission (the “SEC”) announced that communications and marketing provider R.R. Donnelley & Sons Co. (“RRD”) agreed to pay approximately $2.1 million to resolve charges arising out of its response to a 2021 ransomware attack. According to the SEC, RRD’s response to the attack revealed deficiencies in its cybersecurity policies and procedures and related disclosure controls. Specifically, in addition to asserting that RRD had failed to gather and review information about the incident for potential disclosure on a timely basis, the SEC alleged that RRD had failed to implement a “system of cybersecurity-related internal accounting controls” to provide reasonable assurances that access to the company’s assets—namely, its information technology systems and networks—was permitted only with management’s authorization. In particular, the SEC alleged that RRD failed to properly instruct the firm responsible for managing its cybersecurity alerts on how to prioritize such alerts, and then failed to act upon the incoming alerts from this firm.

Recently Enacted AI Law in Colorado: Yet Another Reason to Implement an AI Governance Program

by Avi Gesser, Erez Liebermann, Matt Kelly, Martha Hirst, Andreas Constantine Pavlou, Cameron Sharp, and Annabella M. Waszkiewicz

Top left to right: Avi Gesser, Erez Liebermann, Matt Kelly, and Martha Hirst. Bottom left to right: Andreas Constantine Pavlou, Cameron Sharp, and Annabella M. Waszkiewicz. (Photos courtesy of Debevoise & Plimpton LLP)

On May 17, 2024, Colorado passed Senate Bill 24-205 (“the Colorado AI Law” or “the Law”), a broad law regulating so-called high-risk AI systems that will become effective on February 1, 2026. The Law imposes sweeping obligations on both AI system deployers and developers doing business in Colorado, including a duty of reasonable care to protect Colorado residents from any known or reasonably foreseeable risks of algorithmic discrimination.

Treasury’s Report on AI (Part 1) – Governance and Risk Management

by Charu A. Chandrasekhar, Avi Gesser, Erez Liebermann, Matt Kelly, Johanna Skrzypczyk, Michelle Huang, Sharon Shaji, and Annabella M. Waszkiewicz

Top: Charu A. Chandrasekhar, Avi Gesser, Erez Liebermann, and Matt Kelly. Bottom: Johanna Skrzypczyk, Michelle Huang, Sharon Shaji, and Annabella M. Waszkiewicz. (Photos courtesy of Debevoise & Plimpton LLP)

On March 27, 2024, the U.S. Department of the Treasury (“Treasury”) released a report on Managing Artificial Intelligence-Specific Cybersecurity Risks in the Financial Services Sector (the “Report”). The Report was released in response to President Biden’s Executive Order (“EO”) 14110 on Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence, which spearheaded a government-wide effort to issue Artificial Intelligence (“AI”) risk management guidelines consistent with the White House’s AI principles.

Mitigating AI Risks for Customer Service Chatbots

by Avi Gesser, Jim Pastore, Matt Kelly, Gabriel Kohan, Melissa Muse, and Joshua A. Goland

Top left to right: Avi Gesser, Jim Pastore, and Matt Kelly. Bottom left to right: Gabriel Kohan, Melissa Muse, and Joshua A. Goland. (Photos courtesy of Debevoise & Plimpton LLP)

Online customer service chatbots have been around for years, allowing companies to triage customer queries with pre-programmed responses that addressed customers’ most common questions. Now, Generative AI (“GenAI”) chatbots have the potential to change the customer service landscape by answering a wider variety of questions, on a broader range of topics, and in a more nuanced and lifelike manner. Proponents of this technology argue companies can achieve better customer satisfaction while reducing costs of human-supported customer service. But the risks of irresponsible adoption of GenAI customer service chatbots, including increased litigation and reputational risk, could eclipse their promise.

We have previously discussed risks associated with adopting GenAI tools, as well as measures companies can implement to mitigate those risks. In this Debevoise Data Blog post, we focus on customer service chatbots and provide some practices that can help companies avoid legal and reputational risk when adopting such tools.

100 Days of Cybersecurity Incident Reporting on Form 8-K: Lessons Learned

by Charu A. Chandrasekhar, Erez Liebermann, Benjamin R. Pedersen, Paul M. Rodel, Matt Kelly, Anna Moody, John Jacob, and Kelly Donoghue

Top (left to right): Charu A. Chandrasekhar, Erez Liebermann, Benjamin R. Pedersen, and Paul M. Rodel. Bottom (left to right): Matt Kelly, Anna Moody, John Jacob, and Kelly Donoghue. (Photos courtesy of Debevoise & Plimpton LLP)

On December 18, 2023, the Securities and Exchange Commission’s (the “SEC”) rule requiring disclosure of material cybersecurity incidents became effective. To date, 11 companies have reported a cybersecurity incident under the new Item 1.05 of Form 8-K (“Item 1.05”).[1]

After the first 100 days of mandatory cybersecurity incident reporting, we examine the early results of the SEC’s new disclosure requirement.

30 Days to Form ADV: Have You Reviewed Your AI Disclosures?

by Charu Chandrasekhar, Avi Gesser, Kristin Snyder, Julie M. Riewe, Marc Ponchione, Matt Kelly, Sheena Paul, Mengyi Xu, and Ned Terrace

Top left to right: Charu Chandrasekhar, Avi Gesser, Kristin Snyder, Julie M. Riewe, and Marc Ponchione.
Bottom left to right: Matt Kelly, Sheena Paul, Mengyi Xu, and Ned Terrace. (Photos courtesy of Debevoise & Plimpton LLP)

Registered investment advisers (“RIAs”) have swiftly embraced AI for investment strategy, market research, portfolio management, trading, risk management, and operations. In response to the exploding use of AI across the securities markets, Chair Gensler of the Securities and Exchange Commission (“SEC”) has declared that he plans to prioritize securities fraud in connection with AI disclosures and warned market participants against “AI washing.” Chair Gensler’s statements reflect the SEC’s sharpening scrutiny of AI usage by registrants. The SEC’s Division of Examinations included AI as one of its 2024 examination priorities, and also launched a widespread AI sweep of RIAs focused on AI in connection with advertising, disclosures, investment decisions, and marketing. The SEC previously charged an RIA in connection with misleading Form ADV Part 2A disclosures regarding the risks associated with its use of an AI-based trading tool.

Risk of AI Abuse by Corporate Insiders Presents Challenges for Compliance Departments

by Avi Gesser, Douglas Zolkind, Matt Kelly, Sarah Wolf, Scott J. Woods, and Karen Joo

Top left to right: Avi Gesser, Douglas Zolkind, and Matt Kelly.
Bottom left to right: Sarah Wolf, Scott J. Woods, and Karen Joo. (Photos courtesy of Debevoise & Plimpton LLP)

We recently highlighted the need for companies to manage risks associated with the adoption of AI technology, including the malicious use of real-time deepfakes (i.e., AI-generated audio or video that impersonates a real person). In this article, we address three AI-related insider risks that warrant special attention by corporate compliance departments (i.e., insider deepfakes, barrier evasion, and model manipulation) and present possible ways to mitigate them.
