Tag Archives: Anna R. Gressel

The Top Eight AI Adoption Failures and How to Avoid Them

by Avi Gesser, Matt Kelly, Samuel J. Allaman, Michelle H. Bao, Anna R. Gressel, Michael Pizzi, Lex Gaillard, and Cameron Sharp


Over the past three years, we have observed many companies in a wide range of sectors adopt Artificial Intelligence (“AI”) applications for a host of promising use cases. In some instances, however, those efforts have ended up being less valuable than anticipated—and in a few cases, were abandoned altogether—because certain risks associated with adopting AI were not properly considered or addressed before or during implementation. These risks include issues related to cybersecurity, privacy, contracting, intellectual property, data quality, business continuity, disclosure, and fairness.

In this Debevoise Data Blog post, we examine how the manifestation of these risks can lead to AI adoption “failure” and identify ways companies can mitigate these risks to achieve their goals when implementing AI applications.


The Revised Colorado AI Insurance Regulations: What Was Fixed, and What Still May Need Fixing

by Eric Dinallo, Avi Gesser, Matt Kelly, Samuel J. Allaman, Anna R. Gressel, Melissa Muse, and Stephanie D. Thomas


On May 26, 2023, the Colorado Division of Insurance (the “DOI”) released its Revised Draft Algorithm and Predictive Model Governance Regulation (the “Revised Regulation”), amending its initial draft regulation (the “Initial Regulation”), which was released on February 1, 2023. The Revised Regulation imposes requirements on Colorado-licensed life insurance companies that use external consumer data and information sources (“ECDIS”), as well as algorithms and predictive models (“AI models”) that use ECDIS, in insurance practices. The Revised Regulation comes after months of active engagement between the DOI and industry stakeholders. In this Debevoise In Depth, we discuss the Revised Regulation, how it differs from the Initial Regulation, what additional changes should be considered, and how companies can prepare for compliance.


Regulators Should Treat AI Like Employees to Avoid Stifling Innovation

by Avi Gesser, Jehan A. Patterson, Tricia Bozyk Sherno, Frank Colleluori, and Anna R. Gressel

We recently wrote about how rights-based regulatory regimes for artificial intelligence (as opposed to risk-based frameworks) can lead to a misallocation of resources because compliance will require too much effort on low-risk AI (e.g., spam filters, graphics generation for games, inventory management, etc.) and not enough effort on AI that can actually pose a high risk of harm to consumers or the public (e.g., hiring, lending, underwriting, etc.). In this follow-up blog post, we discuss why regulators should view AI risk the same way as employee risk for large companies, and accordingly adopt risk-based regulatory frameworks for AI.


California Restricts Insurers’ Use of AI and Big Data

by Eric R. Dinallo, Avi Gesser, Marshal L. Bozzo, Anna R. Gressel, and Scott M. Caravello

On June 30, 2022, the California Department of Insurance (the “Department”) released Bulletin 2022-5 (the “Bulletin”), which places several limitations on the use of Artificial Intelligence (“AI”) and alternative data sets (“Big Data”) by the insurance industry. The Bulletin states that the Department is aware of recent allegations of racial discrimination in marketing, rating, underwriting and claims practices by insurance companies and reminds all insurance companies of their obligations to conduct their businesses “in a manner that treats all similarly-situated persons alike.”


The Value of AI Incident Response Plans and Tabletop Exercises

by Avi Gesser, Anna Gressel, Michael R. Roberts, Corey Goldstein, and Erik Rubinstein

Today, it is widely accepted that most large organizations benefit from maintaining a written cybersecurity incident response plan (“CIRP”) to guide their responses to cyberattacks. For businesses that have invested heavily in artificial intelligence (“AI”), the risks of AI-related incidents and the value of implementing an AI incident response plan (“AIRP”) to help mitigate the impact of AI incidents are often underestimated.


Why Ethical AI Initiatives Need Help from Corporate Compliance

by Avi Gesser, Bruce E. Yannett, Douglas S. Zolkind, Anna R. Gressel, and Adele Stichel

Artificial intelligence (AI) is becoming part of the core business operations at many companies. This widespread adoption of AI has led to a proliferation of corporate “ethical AI” principles and programs, as companies seek to ensure that they are using AI fairly and responsibly, and in a manner consistent with the growing expectations of customers, employees, investors, regulators, and the public.

But ethical AI programs at many companies are struggling. Recent reports of AI ethics leaders being fired, resigning, or bringing whistleblower claims illustrate the friction that is common between ethical AI teams and executives who are trying to gain efficiencies and competitive advantages through the adoption of AI.


Model Destruction – The FTC’s Powerful New AI and Privacy Enforcement Tool

by Avi Gesser, Paul D. Rubin, and Anna R. Gressel

A recent FTC settlement is the latest example of a regulator imposing very significant costs on a company for artificial intelligence (“AI”) or privacy violations by requiring it to destroy algorithms or models. As companies invest millions of dollars in big data and AI projects, and regulators become increasingly concerned about the risks associated with automated decision-making (e.g., privacy, bias, transparency, explainability, etc.), it is important for companies to carefully consider the regulatory risks associated with certain data practices. In this Debevoise Data Blog post, we discuss the circumstances in which regulators may require “algorithmic disgorgement” and some best practices for avoiding that outcome.


Cybersecurity and AI Whistleblowers: Unique Challenges and Strategies for Reducing Risk

by Avi Gesser, Anna R. Gressel, Corey Goldstein, and Michael Pizzi

Several recent developments have caused companies to review their whistleblower policies and procedures, especially in the areas of cybersecurity and artificial intelligence (“AI”).


Face Forward: Strategies for Complying with Facial Recognition Laws (Part II of II)

by Jeremy Feigelson, Avi Gesser, Anna Gressel, Andy Gutierrez, and Johanna Skrzypczyk

This is Part 2 in a two-part series of articles about facial recognition laws in the United States. In Part 1, we discussed how current legislation addresses facial recognition. In this part, we assess where the laws seem to be heading and offer some practical risk reduction strategies.


Face Forward: Strategies for Complying with Facial Recognition Laws (Part I of II)

by Jeremy Feigelson, Avi Gesser, Anna Gressel, Andy Gutierrez, and Johanna Skrzypczyk

This is Part 1 of a two-part post.

Two huge cross-currents are sweeping the world of facial recognition—and head-on into each other. Companies are eagerly adopting facial recognition tools to better serve their customers, reduce their fraud risks, and manage their workforces. Meanwhile, legislatures and privacy advocates are pushing back hard. They challenge facial recognition as inherently overreaching, invasive of privacy, and prone to error and bias. Legal restrictions of different kinds have been enacted around the country, with more seemingly certain to come.

How, and when, will the tension between new use cases on the one hand and the push for legal restrictions on the other sort itself out? And what should a company do right now, with facial recognition opportunities presenting themselves today while the law remains a moving target?

This two-part series aims to help. In this Part 1, we lay out the current laws governing facial recognition in the United States. In Part 2, we assess where the law is headed and offer some practical risk-reduction strategies.
