The Top Eight AI Adoption Failures and How to Avoid Them

by Avi Gesser, Matt Kelly, Samuel J. Allaman, Michelle H. Bao, Anna R. Gressel, Michael Pizzi, Lex Gaillard, and Cameron Sharp

Over the past three years, we have observed many companies in a wide range of sectors adopt Artificial Intelligence (“AI”) applications for a host of promising use cases. In some instances, however, those efforts have ended up being less valuable than anticipated—and in a few cases, were abandoned altogether—because certain risks associated with adopting AI were not properly considered or addressed before or during implementation. These risks include issues related to cybersecurity, privacy, contracting, intellectual property, data quality, business continuity, disclosure, and fairness.

In this Debevoise Data Blog post, we examine how these risks can lead to AI adoption “failure” and identify ways companies can mitigate them to achieve their goals when implementing AI applications.


Overview of Global AI Regulatory Developments and Some Tips to Reduce Risk

by Avi Gesser, Matt Kelly, Anna Gressel, Corey Goldstein, Samuel Allaman, Michael Pizzi, Jackie Dorward, Lex Gaillard, and Ned Terrace

With last week’s political deal in the European Parliament to advance the European Union’s groundbreaking AI Act (the “EU AI Act”), Europe is one step closer to enacting the world’s first comprehensive AI regulatory framework. Yet while the EU is poised to become the first jurisdiction to take this step, other countries are not far behind. In recent months, the U.S., Canada, Brazil, and China have all introduced measures that illustrate their respective goals and approaches to regulating AI, with the AI regimes in Canada and Brazil appearing to be modeled substantially on the EU AI Act.

In this blog post, we provide an overview of these legislative developments, highlighting key similarities, differences, and trends across the various national approaches, and offer a few considerations for companies deploying significant AI systems.


The Value of Having AI Governance – Lessons from ChatGPT

by Avi Gesser, Suchita Mandavilli Brundage, Samuel J. Allaman, Melissa Muse, and Lex Gaillard

Last month, we wrote about how many companies were implementing a pilot program for ChatGPT, as a follow-up to our article about companies adopting a policy for the work-related uses of generative AI tools like ChatGPT, Bard, and Claude (which we collectively refer to as “Generative AI”). We discussed how a pilot program often involves designating a small group of employees who test potential Generative AI use cases and then make recommendations to a cross-functional AI governance committee that determines (1) which use cases are prohibited and which are permitted, and (2) for the permitted use cases, what restrictions, if any, should apply.


Does Your Company Need a ChatGPT Pilot Program? Probably.

by Megan Bannigan, Avi Gesser, Henry Lebowitz, Benjamin Leb, Jarrett Lewis, Melissa Muse, Michael R. Roberts, and Lex Gaillard

Last month, we wrote about how many companies probably need a policy for Generative AI tools like ChatGPT, Bard, and Claude (which we collectively refer to as “ChatGPT”). We discussed how employees were using ChatGPT for work (e.g., for fact-checking, first drafts, editing documents, generating ideas and coding) and the various risks of allowing all employees at a company to use ChatGPT without any restrictions (e.g., quality control, contractual, privacy, consumer protection, intellectual property, and vendor management risks). We then provided some suggestions for ways that companies could reduce these risks, including having a ChatGPT policy that organizes ChatGPT use cases into three categories: (1) uses that are prohibited; (2) uses that are permitted with some restrictions, such as labeling, training, and monitoring; and (3) uses that are generally permitted without any restrictions.


Does Your Company Need a ChatGPT Policy? Probably.

by Megan Bannigan, Avi Gesser, Henry Lebowitz, Anna Gressel, Michael R. Roberts, Melissa Muse, Benjamin Leb, Jarrett Lewis, Lex Gaillard, and ChatGPT

ChatGPT is an AI language model developed by OpenAI that was released to the public in November 2022 and already has millions of users. While most people initially used the publicly available version of ChatGPT for personal tasks (e.g., generating recipes, poems, and workout routines), many have started to use it for work-related projects. In this Debevoise Data Blog post, we discuss how people are using ChatGPT at their jobs, the risks associated with that use, and the policies companies should consider implementing to reduce those risks.
