The Final Colorado AI Insurance Regulations: What’s New and How to Prepare

by Avi Gesser, Erez Liebermann, Eric Dinallo, Matt Kelly, Corey Jeremy Goldstein, Stephanie D. Thomas, Samuel J. Allaman, and Basil Fawaz

Top left to right: Avi Gesser, Erez Liebermann, Eric Dinallo and Matt Kelly
Bottom left to right: Corey Jeremy Goldstein, Stephanie D. Thomas, Samuel J. Allaman and Basil Fawaz
(Photos courtesy of Debevoise & Plimpton LLP)

On September 21, 2023, the Colorado Division of Insurance (the “DOI”) released its Final Governance and Risk Management Framework Requirements for Life Insurers’ Use of External Consumer Data and Information Sources, Algorithms, and Predictive Models (the “Final Regulation”). As discussed below, the Final Regulation (which becomes effective on November 14, 2023) reflects several small changes from the previous version of the regulation that was released on May 26, 2023 (the “Draft Regulation”). A redline reflecting these changes can be found here.

The most substantive change is the requirement that insurers remediate any detected unfair discrimination. This change is especially significant in light of the DOI’s September 28, 2023 release of its draft regulation on Quantitative Testing for Unfairly Discriminatory Outcomes for Algorithms and Predictive Models Used for Life Insurance Underwriting (the “Draft Testing Regulation”), which requires insurers to estimate the race and ethnicity of all proposed insureds who have applied for life insurance coverage and then conduct detailed quantitative testing of models that use external consumer data and information sources (“ECDIS”) for potential bias. The Draft Testing Regulation provides that certain results of that prescribed testing methodology will be deemed unfairly discriminatory, thereby requiring the insurer to “immediately take reasonable steps . . . to remediate the unfairly discriminatory outcome . . .” We will be writing much more about our concerns over the Draft Testing Regulation in the coming weeks.

In this Blog Post, we discuss the Final Regulation, how it differs from the Draft Regulation, and what companies should be doing now to prepare for compliance.

The New York Attorney General Issues Guidance on Data Security Best Practices

by Avi Gesser, Erez Liebermann, Stephanie D. Thomas, and Basil Fawaz

Avi Gesser, Erez Liebermann, Stephanie D. Thomas, and Basil Fawaz. (Photos courtesy of Debevoise & Plimpton LLP)

On April 19, 2023, the New York Attorney General (the “NYAG”) published new guidance (the “Guide”) recommending security measures for companies entrusted with consumers’ personal information. The Guide supplements the reasonable safeguards already outlined in the New York Shield Act, which, in part, requires covered entities to maintain reasonable security measures when handling personal information related to New York residents. The Guide reinforces practices that regulators have focused on, such as authentication, encryption, third-party risk management, and data governance. While the Guide’s recommendations are only advisory, it details the NYAG’s Shield Act enforcement actions on these issues, and the Guide is meant to put companies “on notice that they must take their data security obligations seriously.” Following its issuance, the NYAG announced additional Shield Act enforcement actions, including an action against Practicefirst Medical Management Solutions, that underscored many of the security concerns highlighted in the Guide.

NYC’s AI Hiring Law Is Now Final and Effective July 5, 2023

by Avi Gesser, Anna Gressel, Jyotin Hamid, Tricia Bozyk Sherno, and Basil Fawaz

From left to right: Avi Gesser, Anna Gressel, Jyotin Hamid, and Tricia Bozyk Sherno (Photos courtesy of Debevoise & Plimpton LLP)

The New York City Department of Consumer and Worker Protection (the “DCWP”) has adopted final rules (the “Final Rules”) regulating the use of artificial intelligence in hiring practices. The DCWP’s Automated Employment Decision Tool Law (the “AEDT Law” or the “Law”) requires covered employers to conduct annual independent bias audits and to post public summaries of the results. To recap, the DCWP released an initial set of proposed rules on September 23, 2022, and held a public hearing on November 4, 2022. Due to the high volume of comments expressing concern over the Law’s lack of clarity, the DCWP issued a revised set of proposed rules on December 23, 2022, and held a second public hearing on January 23, 2023. After issuing the Final Rules, the DCWP delayed enforcement of the Law a second time, from April 15, 2023, to July 5, 2023.
