Tag Archives: Corey Goldstein

The NYDFS Plans to Impose Significant Obligations on Insurers Using AI or External Data

by Eric Dinallo, Avi Gesser, Erez Liebermann, Marshal Bozzo, Matt Kelly, Johanna Skrzypczyk, Corey Goldstein, Samuel J. Allaman, Michelle Huang, and Sharon Shaji


On January 17, 2024, the New York State Department of Financial Services (the “NYDFS”) issued a Proposed Insurance Circular Letter regarding the Use of Artificial Intelligence Systems and External Consumer Data and Information Sources in Insurance Underwriting and Pricing (the “Proposed Circular” or “PCL”). The Proposed Circular is the latest regulatory development in artificial intelligence (“AI”) for insurers, following the final adoption of Colorado’s AI Governance and Risk Management Framework Regulation (“CO Governance Regulation”) and the proposed Colorado AI Quantitative Testing Regulation (the “CO Proposed Testing Regulation”), discussed here, and the National Association of Insurance Commissioners’ (“NAIC”) model bulletin on the “Use of Artificial Intelligence Systems by Insurers” (the “NAIC Model Bulletin”), discussed here. In the same way that NYDFS’s Part 500 Cybersecurity Regulation influenced standards for cybersecurity beyond New York State and beyond the financial sector, it is possible that the Proposed Circular will have a significant impact on the AI regulatory landscape.

The PCL builds on the NYDFS’s 2019 Insurance Circular Letter No. 1 (the “2019 Letter”) and clarifies some of the 2019 Letter’s disclosure and transparency obligations. The 2019 Letter was limited to the use of external consumer data and information sources (“ECDIS”) for underwriting life insurance and focused on the risks of unlawful discrimination that could result from the use of ECDIS and the need for consumer transparency. The Proposed Circular incorporates the general obligations from the 2019 Letter with more detailed requirements, expands the scope beyond life insurance, and adds significant governance and documentation requirements.


The Final Colorado AI Insurance Regulations: What’s New and How to Prepare

by Avi Gesser, Erez Liebermann, Eric Dinallo, Matt Kelly, Corey Jeremy Goldstein, Stephanie D. Thomas, Samuel J. Allaman, and Basil Fawaz


On September 21, 2023, the Colorado Division of Insurance (the “DOI”) released its Final Governance and Risk Management Framework Requirements for Life Insurers’ Use of External Consumer Data and Information Sources, Algorithms, and Predictive Models (the “Final Regulation”). As discussed below, the Final Regulation (which becomes effective on November 14, 2023) reflects several small changes from the previous version of the regulation that was released on May 26, 2023 (the “Draft Regulation”). A redline reflecting these changes can be found here.

The most substantive change is the requirement that insurers must remediate any detected unfair discrimination. This change is especially significant in light of the DOI’s release of its draft regulation on Quantitative Testing for Unfairly Discriminatory Outcomes for Algorithms and Predictive Models Used for Life Insurance Underwriting (the “Draft Testing Regulation”) on September 28, 2023, which requires insurers to estimate the race and ethnicity of all proposed insureds that have applied for life insurance coverage and then conduct detailed quantitative testing of models that use external consumer data and information sources (“ECDIS”) for potential bias. The Draft Testing Regulation provides that certain results of that prescribed testing methodology will be deemed to be unfairly discriminatory and thereby require the insurer to “immediately take reasonable steps . . . to remediate the unfairly discriminatory outcome . . .” We will be writing much more about our concerns over the Draft Testing Regulation in the coming weeks.

In this Blog Post, we discuss the Final Regulation, how it differs from the Draft Regulation, and what companies should be doing now to prepare for compliance.


Overview of Global AI Regulatory Developments and Some Tips to Reduce Risk

by Avi Gesser, Matt Kelly, Anna Gressel, Corey Goldstein, Samuel Allaman, Michael Pizzi, Jackie Dorward, Lex Gaillard, and Ned Terrace


With last week’s political deal in the European Parliament to advance the European Union’s groundbreaking AI Act (the “EU AI Act”), Europe is one step closer to enacting the world’s first comprehensive AI regulatory framework. Yet while the EU is poised to become the first jurisdiction to take this step, other countries are not far behind. In recent months, the U.S., Canada, Brazil, and China have all introduced measures that illustrate their respective goals and approaches to regulating AI, with the AI regimes in Canada and Brazil appearing to be modeled substantially on the EU AI Act.

In this blog post, we provide an overview of these legislative developments, highlighting key similarities, differences, and trends across each country’s approach, and offer a few considerations for companies deploying significant AI systems.


The Value of AI Incident Response Plans and Tabletop Exercises

by Avi Gesser, Anna Gressel, Michael R. Roberts, Corey Goldstein, and Erik Rubinstein

Today, it is widely accepted that most large organizations benefit from maintaining a written cybersecurity incident response plan (“CIRP”) to guide their responses to cyberattacks.  For businesses that have invested heavily in artificial intelligence (“AI”), the risks of AI-related incidents and the value of implementing an AI incident response plan (“AIRP”) to help mitigate the impact of AI incidents are often underestimated.


The FTC’s Strengthened Safeguards Rule and the Evolving Landscape of Reasonable Data Security

by Jeremy Feigelson, Avi Gesser, Satish Kini, Johanna Skrzypczyk, Lily D. Vo, Corey Goldstein, and Scott M. Caravello

On October 27, 2021, the Federal Trade Commission (the “FTC”) announced significant updates to the Standards for Safeguarding Customer Information (the “Safeguards Rule” or “Amended Rule”).  This rule, promulgated pursuant to the Gramm-Leach-Bliley Act, is designed to protect the consumer data collected by non-bank financial institutions, such as mortgage lenders and brokers, “pay day” lenders, and automobile dealerships, among many others (“subject financial institutions”).  The Amended Rule is likely to have a far-reaching ripple effect and inform the meaning of reasonable data security requirements industry-wide.  In this blog post, we highlight the Amended Rule’s more novel requirements and provide an overview of the potential impacts.


Cybersecurity and AI Whistleblowers: Unique Challenges and Strategies for Reducing Risk

by Avi Gesser, Anna R. Gressel, Corey Goldstein, and Michael Pizzi

Several recent developments have caused companies to review their whistleblower policies and procedures, especially in the areas of cybersecurity and artificial intelligence (“AI”).


SEC Levies $1 Million Penalty for Allegedly Misleading Cybersecurity Incident Disclosures

by Jeremy Feigelson, Avi Gesser, Paul Rodel, Joshua Samit, Charu Chandrasekhar, and Corey Goldstein

The U.S. Securities and Exchange Commission this week took the rare step of penalizing a company for its allegedly poor disclosure of a cyber incident. The SEC announced a $1 million civil penalty against Pearson plc (“Pearson”), a London-based educational publishing company that is a U.S. securities issuer. The penalty resolves charges that Pearson misled investors about a 2018 data breach.

Court Chips Away at Privilege Protections for Cyber Forensic Reports

by Jim Pastore, Luke Dembosky, Jeremy Feigelson, Avi Gesser, Corey Goldstein, and Mengyi Xu

On January 12, Judge James Boasberg of the U.S. District Court for the District of Columbia granted plaintiff Guo Wengui’s motion to compel production of a report (the “Report”) —and related materials—prepared by forensic vendor Duff & Phelps in Guo’s lawsuit against the law firm that formerly represented him, Clark Hill, PLC (the “Firm”). See Wengui v. Clark Hill, PLC, No. 19-cv-3195 (JEB), 2021 WL 106417 (D.D.C. Jan. 12, 2021). The court rejected claims the Report was protected by the work-product doctrine and attorney-client privilege.
