Tag Archives: Erez Liebermann

Managing Cybersecurity Risks Arising from AI — New Guidance from the NYDFS

by Charu A. Chandrasekhar, Luke Dembosky, Avi Gesser, Erez Liebermann, Marshal Bozzo, Johanna Skrzypczyk, Ned Terrace, and Mengyi Xu.

Top left to right: Charu A. Chandrasekhar, Luke Dembosky, Avi Gesser, and Erez Liebermann. 
Bottom left to right: Marshal Bozzo, Johanna Skrzypczyk, Ned Terrace, and Mengyi Xu. (Photos courtesy of Debevoise & Plimpton LLP)

On October 16, 2024, the New York Department of Financial Services (the “NYDFS”) issued an Industry Letter providing guidance on assessing cybersecurity risks associated with the use of AI (the “Guidance”) under the existing 23 NYCRR Part 500 (“Part 500” or “Cybersecurity Regulation”) framework. The Guidance applies to entities that are covered by Part 500 (i.e., entities with a license under the New York Banking Law, Insurance Law or Financial Services Law), but it provides valuable direction to all companies for managing the new cybersecurity risks associated with AI.

The NYDFS makes clear that the Guidance does not impose any new requirements beyond those already contained in the Cybersecurity Regulation. Instead, the Guidance is meant to explain how covered entities should use the Part 500 framework to address cybersecurity risks associated with AI and build controls to mitigate such risks. It also encourages companies to explore the potential cybersecurity benefits from integrating AI into cybersecurity tools (e.g., reviewing security logs and alerts, analyzing behavior, detecting anomalies, and predicting potential security threats). Entities that are covered by Part 500, especially those that have deployed AI in significant ways, should review the Guidance carefully, along with their current cybersecurity policies and controls, to see if any enhancements are appropriate.

SEC Releases New Guidance on Material Cybersecurity Incident Disclosure

by Eric T. Juergens, Erez Liebermann, Benjamin R. Pedersen, Paul M. Rodel, Anna Moody, Kelly Donoghue, and John Jacob

Top left to right: Eric T. Juergens, Erez Liebermann, Benjamin R. Pedersen, and Paul M. Rodel. Bottom left to right: Anna Moody, Kelly Donoghue, and John Jacob. (Photos courtesy of Debevoise & Plimpton LLP)

On June 24, 2024, the staff of the Division of Corporation Finance of the Securities and Exchange Commission (the “SEC”) released five new Compliance & Disclosure Interpretations (“C&DIs”) relating to the disclosure of material cybersecurity incidents under Item 1.05 of Form 8-K. A summary of the updates is below, followed by the full text of the new C&DIs.  While the fact patterns underlying the new C&DIs focus on ransomware, issuers should consider the guidance generally in analyzing disclosure obligations for cybersecurity events.

Treasury’s Report on AI (Part 2) – Managing AI-Specific Cybersecurity Risks in the Financial Sector

by Avi Gesser, Erez Liebermann, Matt Kelly, Jackie Dorward, and Joshua A. Goland

Top: Avi Gesser, Erez Liebermann, and Matt Kelly. Bottom: Jackie Dorward and Joshua A. Goland. (Photos courtesy of Debevoise & Plimpton LLP)

This is the second post in the two-part Debevoise Data Blog series covering the U.S. Treasury Department’s report on Managing Artificial Intelligence-Specific Cybersecurity Risks in the Financial Services Sector (the “Report”).

In Part 1, we addressed the Report’s coverage of the state of AI regulation and best practices recommendations for AI risk management and governance. In Part 2, we review the Report’s assessment of AI-enhanced cybersecurity risks, as well as the risks of attacks against AI systems, and offer guidance on how financial institutions can respond to both types of risks.

Incident Response Plans Are Now Accounting Controls? SEC Brings First-Ever Settled Cybersecurity Internal Controls Charges

by Andrew J. Ceresney, Charu A. Chandrasekhar, Luke Dembosky, Erez Liebermann, Benjamin R. Pedersen, Julie M. Riewe, Matt Kelly, and Anna Moody

Top left to right: Andrew J. Ceresney, Charu A. Chandrasekhar, Luke Dembosky and Erez Liebermann. Bottom left to right: Benjamin R. Pedersen, Julie M. Riewe, Matt Kelly and Anna Moody. (Photos courtesy of Debevoise & Plimpton LLP)

In an unprecedented settlement, on June 18, 2024, the U.S. Securities & Exchange Commission (the “SEC”) announced that communications and marketing provider R.R. Donnelley & Sons Co. (“RRD”) agreed to pay approximately $2.1 million to resolve charges arising out of its response to a 2021 ransomware attack. According to the SEC, RRD’s response to the attack revealed deficiencies in its cybersecurity policies and procedures and related disclosure controls. Specifically, in addition to asserting that RRD had failed to gather and review information about the incident for potential disclosure on a timely basis, the SEC alleged that RRD had failed to implement a “system of cybersecurity-related internal accounting controls” to provide reasonable assurances that access to the company’s assets—namely, its information technology systems and networks—was permitted only with management’s authorization. In particular, the SEC alleged that RRD failed to properly instruct the firm responsible for managing its cybersecurity alerts on how to prioritize such alerts, and then failed to act upon the incoming alerts from this firm.

Recently Enacted AI Law in Colorado: Yet Another Reason to Implement an AI Governance Program

by Avi Gesser, Erez Liebermann, Matt Kelly, Martha Hirst, Andreas Constantine Pavlou, Cameron Sharp, and Annabella M. Waszkiewicz

Top left to right: Avi Gesser, Erez Liebermann, Matt Kelly, and Martha Hirst. Bottom left to right: Andreas Constantine Pavlou, Cameron Sharp, and Annabella M. Waszkiewicz. (Photos courtesy of Debevoise & Plimpton LLP)

On May 17, 2024, Colorado passed Senate Bill 24-205 (“the Colorado AI Law” or “the Law”), a broad law regulating so-called high-risk AI systems that will become effective on February 1, 2026. The Law imposes sweeping obligations on both AI system deployers and developers doing business in Colorado, including a duty of reasonable care to protect Colorado residents from any known or reasonably foreseeable risks of algorithmic discrimination.

Treasury’s Report on AI (Part 1) – Governance and Risk Management

by Charu A. Chandrasekhar, Avi Gesser, Erez Liebermann, Matt Kelly, Johanna Skrzypczyk, Michelle Huang, Sharon Shaji, and Annabella M. Waszkiewicz

Top: Charu A. Chandrasekhar, Avi Gesser, Erez Liebermann, and Matt Kelly
Bottom: Johanna Skrzypczyk, Michelle Huang, Sharon Shaji, and Annabella M. Waszkiewicz
(Photos courtesy of Debevoise & Plimpton LLP)

On March 27, 2024, the U.S. Department of the Treasury (“Treasury”) released a report on Managing Artificial Intelligence-Specific Cybersecurity Risks in the Financial Services Sector (the “Report”). The Report was released in response to President Biden’s Executive Order (“EO”) 14110 on Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence, which spearheaded a government-wide effort to issue Artificial Intelligence (“AI”) risk management guidelines consistent with the White House’s AI principles.

100 Days of Cybersecurity Incident Reporting on Form 8-K: Lessons Learned

by Charu A. Chandrasekhar, Erez Liebermann, Benjamin R. Pedersen, Paul M. Rodel, Matt Kelly, Anna Moody, John Jacob, and Kelly Donoghue

Top (left to right): Charu A. Chandrasekhar, Erez Liebermann, Benjamin R. Pedersen, and Paul M. Rodel
Bottom (left to right): Matt Kelly, Anna Moody, John Jacob, and Kelly Donoghue (Photos courtesy of Debevoise & Plimpton LLP)

On December 18, 2023, the Securities and Exchange Commission’s (the “SEC”) rule requiring disclosure of material cybersecurity incidents became effective. To date, 11 companies have reported a cybersecurity incident under the new Item 1.05 of Form 8-K (“Item 1.05”).[1]

After the first 100 days of mandatory cybersecurity incident reporting, we examine the early results of the SEC’s new disclosure requirement.

NIST Releases Most Significant Update to Cybersecurity Framework Since 2014

by Avi Gesser, Erez Liebermann, Michael R. Roberts, HJ Brehmer, and Annabella M. Waszkiewicz

Left to right: Avi Gesser, Erez Liebermann, Michael R. Roberts, HJ Brehmer, and Annabella M. Waszkiewicz

On February 26, 2024, the National Institute of Standards and Technology (“NIST”) announced the release of Version 2.0 of the Cybersecurity Framework (“Version 2.0” or the “Framework”). We previously wrote about proposed changes to the Framework, which has become an important industry standard for assessing the cybersecurity maturity of organizations and managing cybersecurity risk. Version 2.0’s enhanced guidance, and particularly its additional governance section, should be of interest to counsel as a helpful tool for mapping to new legal requirements from regulators such as the Securities and Exchange Commission (“SEC”), the New York Department of Financial Services (“NYDFS”), and the Commodity Futures Trading Commission (“CFTC”).

The NYDFS Plans to Impose Significant Obligations on Insurers Using AI or External Data

by Eric Dinallo, Avi Gesser, Erez Liebermann, Marshal Bozzo, Matt Kelly, Johanna Skrzypczyk, Corey Goldstein, Samuel J. Allaman, Michelle Huang, and Sharon Shaji

Top (from left to right): Eric Dinallo, Avi Gesser, Erez Liebermann, Marshal Bozzo, and Matt Kelly
Bottom (from left to right): Johanna Skrzypczyk, Corey Goldstein, Samuel J. Allaman, Michelle Huang, and Sharon Shaji (Photos courtesy of Debevoise & Plimpton LLP)

On January 17, 2024, the New York State Department of Financial Services (the “NYDFS”) issued a Proposed Insurance Circular Letter regarding the Use of Artificial Intelligence Systems and External Consumer Data and Information Sources in Insurance Underwriting and Pricing (the “Proposed Circular” or “PCL”). The Proposed Circular is the latest regulatory development in artificial intelligence (“AI”) for insurers, following the final adoption of Colorado’s AI Governance and Risk Management Framework Regulation (“CO Governance Regulation”) and the proposed Colorado AI Quantitative Testing Regulation (the “CO Proposed Testing Regulation”), discussed here, and the National Association of Insurance Commissioners’ (“NAIC”) model bulletin on the “Use of Artificial Intelligence Systems by Insurers” (the “NAIC Model Bulletin”), discussed here. In the same way that NYDFS’s Part 500 Cybersecurity Regulation influenced standards for cybersecurity beyond New York State and beyond the financial sector, it is possible that the Proposed Circular will have a significant impact on the AI regulatory landscape.

The PCL builds on the NYDFS’s 2019 Insurance Circular Letter No. 1 (the “2019 Letter”) and includes some clarifying points on the 2019 Letter’s disclosure and transparency obligations. The 2019 Letter was limited to the use of external consumer data and information sources (“ECDIS”) for underwriting life insurance and focused on the risks of unlawful discrimination that could result from the use of ECDIS and the need for consumer transparency. The Proposed Circular incorporates the general obligations from the 2019 Letter, adds more detailed requirements, expands the scope beyond life insurance, and imposes significant governance and documentation requirements.

Looking Back at Fall 2023 PCCE Events: Conference on Security, Privacy, and Consumer Protection

As we prepare for a full schedule of events in 2024, the NYU School of Law Program on Corporate Compliance and Enforcement (PCCE) is taking a moment to reflect on our busy Fall 2023 program. In this post, we review our November 17, 2023 full-day conference on Security, Privacy, and Consumer Protection.
