Tag Archives: Melissa Muse

Mitigating AI Risks for Customer Service Chatbots

by Avi Gesser, Jim Pastore, Matt Kelly, Gabriel Kohan, Melissa Muse, and Joshua A. Goland


Top left to right: Avi Gesser, Jim Pastore, and Matt Kelly. Bottom left to right: Gabriel Kohan, Melissa Muse, and Joshua A. Goland. (Photos courtesy of Debevoise & Plimpton LLP)

Online customer service chatbots have been around for years, allowing companies to triage customer queries with pre-programmed responses that address customers’ most common questions. Now, Generative AI (“GenAI”) chatbots have the potential to change the customer service landscape by answering a wider variety of questions, on a broader range of topics, and in a more nuanced and lifelike manner. Proponents of this technology argue that companies can achieve better customer satisfaction while reducing the costs of human-supported customer service. But the risks of irresponsible adoption of GenAI customer service chatbots, including increased litigation and reputational risk, could eclipse their promise.

We have previously discussed risks associated with adopting GenAI tools, as well as measures companies can implement to mitigate those risks. In this Debevoise Data Blog post, we focus on customer service chatbots and provide some practices that can help companies avoid legal and reputational risk when adopting such tools.


Resisting Hindsight Bias: A Proposed Framework for CISO Liability

by Andrew J. Ceresney, Charu A. Chandrasekhar, Luke Dembosky, Erez Liebermann, Julie M. Riewe, Anna Moody, Andreas A. Glimenakis, and Melissa Muse


Top left to right: Andrew J. Ceresney, Charu A. Chandrasekhar, Luke Dembosky, and Erez Liebermann. Bottom left to right: Julie M. Riewe, Anna Moody, Andreas A. Glimenakis, and Melissa Muse. (Photos courtesy of Debevoise & Plimpton LLP)

On October 30, 2023, the U.S. Securities and Exchange Commission (“SEC” or “Commission”) charged SolarWinds Corporation’s (“SolarWinds” or the “Company”) chief information security officer (“CISO”) with violations of the anti-fraud provisions of the federal securities laws in connection with alleged disclosure and internal controls violations related both to the Russian cyberattack on the Company discovered in December 2020 and to alleged undisclosed weaknesses in the Company’s cybersecurity program dating back to 2018.[1] This is the first time the SEC has charged a CISO in connection with alleged violations of the federal securities laws occurring within the scope of his or her cybersecurity functions.[2] In doing so, the SEC has raised industry concerns that it intends, with the benefit of 20/20 hindsight but without the benefit of core cybersecurity expertise, to dissect a CISO’s good-faith judgments in the aftermath of a cybersecurity incident; to wield incidents to second-guess the design and effectiveness of a company’s entire cybersecurity program (including as it intersects with internal accounting controls designed to identify and prevent errors or inaccuracies in financial reporting) and related disclosures; and to attempt to hold the CISO liable for any perceived failures.


The EU AI Act – Navigating the EU’s Legislative Labyrinth

by Avi Gesser, Matt Kelly, Martha Hirst, Samuel J. Allaman, Melissa Muse, and Samuel Thomson

From left to right: Avi Gesser, Matt Kelly, Martha Hirst, Samuel J. Allaman, and Melissa Muse. Not pictured: Samuel Thomson. (Photos courtesy of Debevoise & Plimpton LLP).

As legislators and regulators around the world are trying to determine how to approach the novel risks and opportunities that AI technologies present, the draft European Union Artificial Intelligence Act (the “EU AI Act” or the “Act”) is a highly anticipated step towards the future of AI regulation. Despite recent challenges in the EU “trilogue negotiations”, proponents still hope to reach a compromise on the key terms by 6th December, with a view to passing the Act in 2024 and most of the provisions becoming effective sometime in 2026.

As one of the few well-progressed AI-specific laws currently in existence, the EU AI Act has generated substantial global attention. Analogous to the influential role played by the EU’s GDPR in shaping the contours of global data privacy laws, the EU AI Act similarly has the potential to influence the worldwide evolution of AI regulation.

This blog post summarizes the complexities of the EU legislative process to explain the current status of, and next steps for, the draft EU AI Act. It also outlines steps that businesses may want to start taking now in preparation for incoming AI regulation.


National Association of Attorneys General’s 2023 Consumer Protection Spring Conference

by Courtney M. Dankworth, Avi Gesser, Paul D. Rubin, Jehan A. Patterson, Sam Allaman, and Melissa Muse


From top left to right: Courtney M. Dankworth, Avi Gesser, and Paul D. Rubin.
From bottom left to right: Jehan A. Patterson, Sam Allaman, and Melissa Muse.
(Photos courtesy of Debevoise & Plimpton)

On May 10−12, 2023, the National Association of Attorneys General (the “NAAG”) held its Spring 2023 Consumer Protection Conference to discuss the intersection of consumer protection issues and technology. During the portion of the conference that was open to the public, panels featuring federal and state regulators, private legal practitioners, and industry experts discussed potential legal liabilities and consumer risks related to artificial intelligence (“AI”), online lending, and targeted advertising.

In this Debevoise Update, we recap some of the panels and remarks, which emphasized regulators’ increased scrutiny of the intersection of consumer protection and emerging technologies, focusing on the leading themes from the conference: transparency, fairness, and privacy.


The Revised Colorado AI Insurance Regulations: What Was Fixed, and What Still May Need Fixing

by Eric Dinallo, Avi Gesser, Matt Kelly, Samuel J. Allaman, Anna R. Gressel, Melissa Muse, and Stephanie D. Thomas


From top left to right: Eric Dinallo, Avi Gesser, Matt Kelly, and Samuel J. Allaman.
From bottom left to right: Anna R. Gressel, Melissa Muse, and Stephanie D. Thomas.
(Photos courtesy of Debevoise & Plimpton LLP)

On May 26, 2023, the Colorado Division of Insurance (the “DOI”) released its Revised Draft Algorithm and Predictive Model Governance Regulation (the “Revised Regulation”), amending its initial draft regulation (the “Initial Regulation”), which was released on February 1, 2023. The Revised Regulation imposes requirements on Colorado-licensed life insurance companies that use external consumer data and information sources (“ECDIS”), as well as algorithms and predictive models (“AI models”) that use ECDIS, in insurance practices. The Revised Regulation comes after months of active engagement between the DOI and industry stakeholders. In this Debevoise In Depth, we discuss the Revised Regulation, how it differs from the Initial Regulation, what additional changes should be considered, and how companies can prepare for compliance.


The Value of Having AI Governance – Lessons from ChatGPT

by Avi Gesser, Suchita Mandavilli Brundage, Samuel J. Allaman, Melissa Muse, and Lex Gaillard


From left to right: Avi Gesser, Suchita Mandavilli Brundage, Samuel J. Allaman, Melissa Muse, and Lex Gaillard (photos courtesy of Debevoise & Plimpton LLP)

Last month, we wrote about how many companies were implementing a pilot program for ChatGPT, as a follow-up to our article about companies adopting a policy for the work-related uses of generative AI tools like ChatGPT, Bard, and Claude (which we collectively refer to as “Generative AI”). We discussed how a pilot program often involves designating a small group of employees who test potential Generative AI use cases and then make recommendations to a cross-functional AI governance committee that determines (1) which use cases are prohibited and which are permitted, and (2) for the permitted use cases, what restrictions, if any, should apply.


Does Your Company Need a ChatGPT Pilot Program? Probably.

by Megan Bannigan, Avi Gesser, Henry Lebowitz, Benjamin Leb, Jarrett Lewis, Melissa Muse, Michael R. Roberts, and Lex Gaillard


Top row from left to right: Megan Bannigan, Avi Gesser, Henry Lebowitz, and Benjamin Leb
Bottom row from left to right: Jarrett Lewis, Melissa Muse, Michael R. Roberts, and Lex Gaillard
(Photos courtesy of Debevoise & Plimpton LLP)

Last month, we wrote about how many companies probably need a policy for Generative AI tools like ChatGPT, Bard and Claude (which we collectively refer to as “ChatGPT”). We discussed how employees were using ChatGPT for work (e.g., for fact-checking, first drafts, editing documents, generating ideas and coding) and the various risks of allowing all employees at a company to use ChatGPT without any restrictions (e.g., quality control, contractual, privacy, consumer protection, intellectual property, and vendor management risks). We then provided some suggestions for ways that companies could reduce these risks, including having a ChatGPT policy that organizes ChatGPT use cases into three categories: (1) uses that are prohibited; (2) uses that are permitted with some restrictions, such as labeling, training, and monitoring; and (3) uses that are generally permitted without any restrictions.


Colorado Draft AI Insurance Rules Are a Watershed for AI Governance Regulation

by Eric Dinallo, Avi Gesser, Erez Liebermann, Marshal Bozzo, Anna Gressel, Sam Allaman, Melissa Muse, and Jackie Dorward


From top left to right: Eric Dinallo, Avi Gesser, Erez Liebermann, and Marshal Bozzo. From bottom left to right: Anna Gressel, Sam Allaman, and Melissa Muse. (Photos courtesy of Debevoise & Plimpton LLP)

On February 1, 2023, the Colorado Division of Insurance (“DOI”) released its draft Algorithm and Predictive Model Governance Regulation (the “Draft AI Regulation”). The Draft AI Regulation imposes requirements on Colorado-licensed life insurance companies that use external data and AI systems in insurance practices. This release follows months of highly active engagement between the DOI and industry stakeholders, resulting in a first-in-the-nation set of AI and Big Data governance rules that will influence state, federal, and international AI regulations for many years to come.


Does Your Company Need a ChatGPT Policy? Probably.

by Megan Bannigan, Avi Gesser, Henry Lebowitz, Anna Gressel, Michael R. Roberts, Melissa Muse, Benjamin Leb, Jarrett Lewis, Lex Gaillard, and ChatGPT


Top row left to right: Megan Bannigan, Avi Gesser, Henry Lebowitz, and Anna Gressel
Bottom row left to right: Michael R. Roberts, Melissa Muse, Benjamin Leb, and Jarrett Lewis

ChatGPT is an AI language model developed by OpenAI that was released to the public in November 2022 and already has millions of users. While most people initially used the publicly available version of ChatGPT for personal tasks (e.g., generating recipes, poems, and workout routines), many have started to use it for work-related projects. In this Debevoise Data Blog post, we discuss how people are using ChatGPT at their jobs, the associated risks, and the policies companies should consider implementing to reduce those risks.
