Tag Archives: Michelle Huang

Treasury’s Report on AI (Part 1) – Governance and Risk Management

by Charu A. Chandrasekhar, Avi Gesser, Erez Liebermann, Matt Kelly, Johanna Skrzypczyk, Michelle Huang, Sharon Shaji, and Annabella M. Waszkiewicz

On March 27, 2024, the U.S. Department of the Treasury (“Treasury”) released a report on Managing Artificial Intelligence-Specific Cybersecurity Risks in the Financial Services Sector (the “Report”). The Report responds to President Biden’s Executive Order (“EO”) 14110 on Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence, which spearheaded a government-wide effort to issue Artificial Intelligence (“AI”) risk management guidelines consistent with the White House’s AI principles.

Preparing for AI Whistleblowers

by Charu A. Chandrasekhar, Avi Gesser, Arian M. June, Michelle Huang, Cooper Yoo, and Sharon Shaji

As artificial intelligence (“AI”) use and capabilities surge, a new risk is emerging for companies: AI whistleblowers. Increased regulatory scrutiny of AI use and record-breaking whistleblower activity have set the stage for an escalation of AI whistleblower-related enforcement. As we’ve previously written and spoken about, the risk of AI whistleblowers is rising as whistleblower protections and awards expand, as internal company disputes over cybersecurity and AI increase due to a lack of clear regulatory guidance, and as public skepticism mounts over the ability of companies to protect consumers against cybersecurity and AI risks.

The NYDFS Plans to Impose Significant Obligations on Insurers Using AI or External Data

by Eric Dinallo, Avi Gesser, Erez Liebermann, Marshal Bozzo, Matt Kelly, Johanna Skrzypczyk, Corey Goldstein, Samuel J. Allaman, Michelle Huang, and Sharon Shaji

On January 17, 2024, the New York State Department of Financial Services (the “NYDFS”) issued a Proposed Insurance Circular Letter regarding the Use of Artificial Intelligence Systems and External Consumer Data and Information Sources in Insurance Underwriting and Pricing (the “Proposed Circular” or “PCL”). The Proposed Circular is the latest regulatory development in artificial intelligence (“AI”) for insurers, following the final adoption of Colorado’s AI Governance and Risk Management Framework Regulation (“CO Governance Regulation”) and the proposed Colorado AI Quantitative Testing Regulation (the “CO Proposed Testing Regulation”), discussed here, and the National Association of Insurance Commissioners’ (“NAIC”) model bulletin on the “Use of Artificial Intelligence Systems by Insurers” (the “NAIC Model Bulletin”), discussed here. In the same way that NYDFS’s Part 500 Cybersecurity Regulation influenced standards for cybersecurity beyond New York State and beyond the financial sector, it is possible that the Proposed Circular will have a significant impact on the AI regulatory landscape.

The PCL builds on the NYDFS’s 2019 Insurance Circular Letter No. 1 (the “2019 Letter”) and clarifies certain of the 2019 Letter’s disclosure and transparency obligations. The 2019 Letter was limited to the use of external consumer data and information sources (“ECDIS”) in underwriting life insurance, focusing on the risk of unlawful discrimination that could result from the use of ECDIS and the need for consumer transparency. The Proposed Circular incorporates the general obligations of the 2019 Letter, adds more detailed requirements, expands the scope beyond life insurance, and imposes significant new governance and documentation requirements.

NYDFS Proposes Significant Changes to Its Cybersecurity Rules

by Luke Dembosky, Avi Gesser, Erez Liebermann, Jim Pastore, Charu A. Chandrasekhar, H. Jacqueline Brehmer, Michelle Huang, and Mengyi Xu

On July 29, 2022, the New York Department of Financial Services (“NYDFS”) released Draft Amendments to its Part 500 Cybersecurity Rules, which include a mandatory 24-hour notification for cyber ransom payments, annual independent cybersecurity audits for larger entities, increased expectations for board expertise, and tough new restrictions on privileged accounts. There will be a very short 10-day pre-proposal comment period (ending August 8, 2022), followed by publication of the official proposed amendments in the coming weeks, which will start a 60-day comment period.

Time to Update Cyber Incident Response Plans, Especially for Banks Subject to the New 36-Hour Breach Notification Rule

by Luke Dembosky, Avi Gesser, Johanna Skrzypczyk, Michael R. Roberts, Andy Gutierrez, and Michelle Huang

As cyberattacks continue to plague U.S. companies, cybersecurity remains a core risk, even for businesses that have invested heavily in technical measures to protect their systems.  As a result, cybersecurity best practices have evolved to include not only preventative measures, but also robust preparations for responding to cyber incidents, so that companies can improve their resilience, decrease the time it takes to detect and effectively respond to an attack, and reduce the overall damage.  Because nearly every company will at some point face a successful attack, regulators, insurers, auditors, and investors view an incident response plan (“IRP”) as a key element of a reasonable cybersecurity program.

Part of the value of an IRP comes from the process of drafting it, which involves making decisions about how an incident will be handled (e.g., who should be drafting communications to impacted employees, who has the authority to shut down parts of the network, which incidents will be escalated to senior management, etc.).  Determining these issues over the course of several weeks while drafting the IRP and consulting with the relevant individuals is much better than working through them for the first time under the stress and time constraints of an actual incident.  Well-drafted IRPs also provide checklists of things to do when an incident occurs (e.g., preserve evidence, contact the FBI, notify the insurer, draft a public statement, determine a point-of-contact for external inquiries, etc.).
