Tag Archives: Charu A. Chandrasekhar

Managing Cybersecurity Risks Arising from AI — New Guidance from the NYDFS

by Charu A. Chandrasekhar, Luke Dembosky, Avi Gesser, Erez Liebermann, Marshal Bozzo, Johanna Skrzypczyk, Ned Terrace, and Mengyi Xu


On October 16, 2024, the New York Department of Financial Services (the “NYDFS”) issued an Industry Letter providing guidance on assessing cybersecurity risks associated with the use of AI (the “Guidance”) under the existing 23 NYCRR Part 500 (“Part 500” or “Cybersecurity Regulation”) framework. The Guidance applies to entities that are covered by Part 500 (i.e., entities with a license under the New York Banking Law, Insurance Law or Financial Services Law), but it provides valuable direction to all companies for managing the new cybersecurity risks associated with AI.

The NYDFS makes clear that the Guidance does not impose any new requirements beyond those already contained in the Cybersecurity Regulation. Instead, the Guidance is meant to explain how covered entities should use the Part 500 framework to address cybersecurity risks associated with AI and build controls to mitigate such risks. It also encourages companies to explore the potential cybersecurity benefits from integrating AI into cybersecurity tools (e.g., reviewing security logs and alerts, analyzing behavior, detecting anomalies, and predicting potential security threats). Entities that are covered by Part 500, especially those that have deployed AI in significant ways, should review the Guidance carefully, along with their current cybersecurity policies and controls, to see if any enhancements are appropriate.


Supreme Court Punches SEC APs Right in the Seventh Amendment

by Andrew J. Ceresney, Charu A. Chandrasekhar, Arian M. June, Robert B. Kaplan, Julie M. Riewe, Kristin A. Snyder, and Jonathan R. Tuttle


Recently, in a long-awaited ruling with significant implications for the securities industry and administrative agencies more generally, the U.S. Supreme Court affirmed the Fifth Circuit’s decision in Jarkesy v. SEC, holding that the Seventh Amendment right to a jury trial precluded the U.S. Securities and Exchange Commission (the “SEC”) from pursuing monetary penalties for securities fraud violations through in-house administrative adjudications. The key takeaways are:

  • The Court’s ruling was limited to securities fraud claims, but other SEC claims seeking legal remedies may be impacted, as well as claims by other federal agencies that may have been adjudicated in-house previously.
  • We expect that the SEC will continue its practice of bringing new enforcement actions in district court, except when a claim is available only in the administrative forum.
  • Because of the majority decision’s focus on fraud’s common-law roots, the decision raises questions about whether the SEC may bring negligence-based or strict liability claims seeking penalties administratively.
  • The Court did not resolve other constitutional questions concerning the SEC’s administrative law judges, including whether the SEC’s use of administrative proceedings violates the non-delegation doctrine and whether the SEC’s administrative law judges are unconstitutionally protected from removal in violation of Article II.
  • We anticipate additional litigation regarding these unresolved issues.


Incident Response Plans Are Now Accounting Controls? SEC Brings First-Ever Settled Cybersecurity Internal Controls Charges

by Andrew J. Ceresney, Charu A. Chandrasekhar, Luke Dembosky, Erez Liebermann, Benjamin R. Pedersen, Julie M. Riewe, Matt Kelly, and Anna Moody


In an unprecedented settlement, on June 18, 2024, the U.S. Securities and Exchange Commission (the “SEC”) announced that communications and marketing provider R.R. Donnelley & Sons Co. (“RRD”) agreed to pay approximately $2.1 million to resolve charges arising out of its response to a 2021 ransomware attack. According to the SEC, RRD’s response to the attack revealed deficiencies in its cybersecurity policies and procedures and related disclosure controls. Specifically, in addition to asserting that RRD had failed to gather and review information about the incident for potential disclosure on a timely basis, the SEC alleged that RRD had failed to implement a “system of cybersecurity-related internal accounting controls” to provide reasonable assurances that access to the company’s assets—namely, its information technology systems and networks—was permitted only with management’s authorization. In particular, the SEC alleged that RRD failed to properly instruct the firm responsible for managing its cybersecurity alerts on how to prioritize those alerts, and then failed to act upon the incoming alerts from that firm.


Treasury’s Report on AI (Part 1) – Governance and Risk Management

by Charu A. Chandrasekhar, Avi Gesser, Erez Liebermann, Matt Kelly, Johanna Skrzypczyk, Michelle Huang, Sharon Shaji, and Annabella M. Waszkiewicz


On March 27, 2024, the U.S. Department of the Treasury (“Treasury”) released a report on Managing Artificial Intelligence-Specific Cybersecurity Risks in the Financial Services Sector (the “Report”). The Report was released in response to President Biden’s Executive Order (“EO”) 14110 on Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence, which spearheaded a government-wide effort to issue Artificial Intelligence (“AI”) risk management guidelines consistent with the White House’s AI principles.

Preparing for AI Whistleblowers

by Charu A. Chandrasekhar, Avi Gesser, Arian M. June, Michelle Huang, Cooper Yoo, and Sharon Shaji


As artificial intelligence (“AI”) use and capabilities surge, a new risk is emerging for companies: AI whistleblowers. Increased regulatory scrutiny over AI use and record-breaking whistleblower activity have together set the stage for an escalation of AI whistleblower-related enforcement. As we have previously written and spoken about, the risk of AI whistleblowers is rising as whistleblower protections and awards expand, internal company disputes over cybersecurity and AI increase due to a lack of clear regulatory guidance, and public skepticism mounts over the ability of companies to offer consumer protections against cybersecurity and AI risks.


100 Days of Cybersecurity Incident Reporting on Form 8-K: Lessons Learned

by Charu A. Chandrasekhar, Erez Liebermann, Benjamin R. Pedersen, Paul M. Rodel, Matt Kelly, Anna Moody, John Jacob, and Kelly Donoghue


On December 18, 2023, the Securities and Exchange Commission’s (the “SEC”) rule requiring disclosure of material cybersecurity incidents became effective. To date, 11 companies have reported a cybersecurity incident under the new Item 1.05 of Form 8-K (“Item 1.05”).[1]

After the first 100 days of mandatory cybersecurity incident reporting, we examine the early results of the SEC’s new disclosure requirement.


AI Enforcement Starts with Washing: The SEC Charges its First AI Fraud Cases

by Andrew J. Ceresney, Charu A. Chandrasekhar, Avi Gesser, Arian M. June, Robert B. Kaplan, Julie M. Riewe, Jeff Robins, and Kristin A. Snyder


On March 18, 2024, the U.S. Securities and Exchange Commission (“SEC”) announced settled charges against two investment advisers, Delphia (USA) Inc. (“Delphia”) and Global Predictions Inc. (“Global Predictions”), for making false and misleading statements about their alleged use of artificial intelligence (“AI”) in connection with providing investment advice. These settlements are the SEC’s first-ever cases charging violations of the antifraud provisions of the federal securities laws in connection with AI disclosures, and they also include the first settled charges involving AI in connection with the Marketing and Compliance Rules under the Investment Advisers Act of 1940 (“Advisers Act”). The matters reflect Chair Gensler’s determination to target “AI washing”—securities fraud in connection with AI disclosures under existing provisions of the federal securities laws—and underscore that public companies, investment advisers and broker-dealers will face rapidly increasing scrutiny from the SEC in connection with their AI disclosures, policies and procedures. We have previously discussed Chair Gensler’s scrutiny of AI washing and AI disclosure risk in Form ADV Part 2A filings. In this client alert, we discuss the charges and AI disclosure and compliance takeaways.


Resisting Hindsight Bias: A Proposed Framework for CISO Liability

by Andrew J. Ceresney, Charu A. Chandrasekhar, Luke Dembosky, Erez Liebermann, Julie M. Riewe, Anna Moody, Andreas A. Glimenakis, and Melissa Muse


On October 30, 2023, the U.S. Securities and Exchange Commission (“SEC” or “Commission”) charged SolarWinds Corporation’s (“SolarWinds” or the “Company”) chief information security officer (“CISO”) with violations of the anti-fraud provisions of the federal securities laws in connection with alleged disclosure and internal controls violations related both to the Russian cyberattack on the Company discovered in December 2020 and to alleged undisclosed weaknesses in the Company’s cybersecurity program dating back to 2018.[1] This is the first time the SEC has charged a CISO in connection with alleged violations of the federal securities laws occurring within the scope of his or her cybersecurity functions.[2] In doing so, the SEC has raised industry concerns that it intends, with the benefit of 20/20 hindsight but without the benefit of core cybersecurity expertise, to dissect a CISO’s good-faith judgments in the aftermath of a cybersecurity incident; to wield incidents to second-guess the design and effectiveness of a company’s entire cybersecurity program (including as it intersects with internal accounting controls designed to identify and prevent errors or inaccuracies in financial reporting) and related disclosures; and to attempt to hold the CISO liable for any perceived failures.


Hackers Turned Whistleblowers: SEC Cybersecurity Rules Weaponized Over Ransom Threat

by Andrew J. Ceresney, Charu A. Chandrasekhar, Luke Dembosky, Avi Gesser, Matthew E. Kaplan, Erez Liebermann, Benjamin R. Pedersen, Steven J. Slutzky, Jonathan R. Tuttle, Matt Kelly, and Kelly Donoghue


On November 7, 2023, the prolific ransomware group AlphV (a/k/a “BlackCat”) reportedly breached software company MeridianLink’s information systems, exfiltrated data, and demanded payment in exchange for not publicly releasing the stolen data. While this type of cybersecurity incident has become increasingly common, the threat actor’s next move was less predictable. AlphV filed a whistleblower tip with the U.S. Securities and Exchange Commission (the “SEC”) against its victim for failing to publicly disclose the cybersecurity incident. AlphV wrote in its complaint[1]:

We want to bring to your attention a concerning issue regarding MeridianLink’s compliance with the recently adopted cybersecurity incident disclosure rules. It has come to our attention that MeridianLink, in light of a significant breach compromising customer data and operational information, has failed to file the requisite disclosure under Item 1.05 of Form 8-K within the stipulated four business days, as mandated by the new SEC rules.

As we have previously reported, the SEC adopted final rules mandating disclosure of cybersecurity risk, strategy and governance, as well as material cybersecurity incidents. This includes new Item 1.05 of Form 8-K, which, beginning December 18, will require registrants to disclose certain information about a material cybersecurity incident within four business days of determining that a cybersecurity incident it has experienced is material. Though AlphV jumped the gun on the applicability of new Item 1.05, its familiarity with, and exploitation of, its target’s public disclosure obligations is a further escalation in a steadily increasing trend of pressure tactics by leading ransomware groups.


SEC Proposes Rule to Eliminate or Neutralize Conflicts in the Use of “Predictive Data Analytics” Technologies

by Andrew J. Ceresney, Charu A. Chandrasekhar, Avi Gesser, Jeff Robins, Matt Kelly, Gary E. Murphy, Jarrett Lewis, Robert B. Kaplan, Marc Ponchione, Sheena Paul, Catherine Morrison, Julie M. Riewe, Kristin A. Snyder, and Mengyi Xu


On July 26, 2023, the U.S. Securities and Exchange Commission (“SEC”) issued proposed rules (the “Proposed Rules”) that would require broker-dealers and investment advisers (collectively, “firms”) to evaluate their use of predictive data analytics (“PDA”) and other covered technologies in connection with investor interactions and to eliminate or neutralize certain conflicts of interest associated with such use. The Proposed Rules also contain amendments to rules under the Securities Exchange Act of 1934[1] (“Exchange Act”) and the Investment Advisers Act of 1940[2] (“Advisers Act”) that would require firms to have policies and procedures to achieve compliance with the rules and to make and maintain related records.

In this memorandum, we first discuss the scope of the Proposed Rules and provide a summary of key provisions. We also discuss some key implications regarding the scope and application of the rules if adopted as proposed. The full text of the proposal is available here.
