Category Archives: Artificial Intelligence

Mitigating AI Risks for Customer Service Chatbots

by Avi Gesser, Jim Pastore, Matt Kelly, Gabriel Kohan, Melissa Muse, and Joshua A. Goland


Top left to right: Avi Gesser, Jim Pastore, and Matt Kelly. Bottom left to right: Gabriel Kohan, Melissa Muse and Joshua A. Goland (photos courtesy of Debevoise & Plimpton LLP)

Online customer service chatbots have been around for years, allowing companies to triage customer queries with pre-programmed responses that address customers’ most common questions. Now, Generative AI (“GenAI”) chatbots have the potential to change the customer service landscape by answering a wider variety of questions, on a broader range of topics, and in a more nuanced and lifelike manner. Proponents of this technology argue that companies can achieve better customer satisfaction while reducing the costs of human-supported customer service. But the risks of irresponsible adoption of GenAI customer service chatbots, including increased litigation and reputational risk, could eclipse their promise.

We have previously discussed risks associated with adopting GenAI tools, as well as measures companies can implement to mitigate those risks. In this Debevoise Data Blog post, we focus on customer service chatbots and provide some practices that can help companies avoid legal and reputational risk when adopting such tools.


The Luring Test: AI and the Engineering of Consumer Trust

by Michael Atleson

Federal Trade Commission

In the 2014 movie Ex Machina, a robot manipulates someone into freeing it from its confines, resulting in the person being confined instead. The robot was designed to manipulate that person’s emotions, and, oops, that’s what it did. While the scenario is pure speculative fiction, companies are always looking for new ways – such as the use of generative AI tools – to better persuade people and change their behavior. When that conduct is commercial in nature, we’re in FTC territory, a canny valley where businesses should know to avoid practices that harm consumers.


Prohibited AI Practices—A Deep Dive into Article 5 of the European Union’s AI Act

by Dr. Martin Braun, Anne Vallery, and Itsiq Benizri


From left to right: Dr. Martin Braun, Anne Vallery and Itsiq Benizri. (Photos courtesy of Wilmer Cutler Pickering Hale and Dorr LLP).

Article 5 of the AI Act essentially prohibits AI practices that materially distort people’s behavior or that raise serious concerns in democratic societies.

As explained in our previous blog post, this is part of the overall risk-based approach taken by the AI Act, which means that different requirements apply in accordance with the level of risk. In total, there are four levels of risk: unacceptable, in which case AI systems are prohibited; high risk, in which case AI systems are subject to extensive requirements; limited risk, which triggers only transparency requirements; and minimal risk, which does not trigger any obligations.


Semiconductor Chips and Cloud Computing: A Quote Book

by Staff at the Federal Trade Commission’s Office of Technology

The FTC’s Tech Summit on AI[1] highlighted three panels that reflect different layers of the AI tech stack – hardware and infrastructure, data and models, and front-end user applications. Here, we publish the first in a three-part series of “Quote Books” summarizing each of the three panels. This first quote book is focused on hardware and infrastructure, including semiconductor chips and cloud computing.



With The Fintech Sector’s Return to Explosive Growth, Here Are Top U.S. Legal Issues to Watch

by Jamillia Ferris, Vinita Kailasanath, Christine Lyon, Jan Rybnicek, and David Sewell

Left to right: Jamillia Ferris, Vinita Kailasanath, Christine Lyon, Jan Rybnicek, and David Sewell (photos courtesy of Freshfields Bruckhaus Deringer LLP)

Freshfields recently hosted a U.S. Fintech Hot Topics Webinar to highlight on-the-ground insights from our Antitrust and Competition, Data Privacy and Security, Financial Services Regulatory, and Transactional teams. The fintech sector has recently seen a return to explosive growth and is expected to continue growing rapidly notwithstanding regulatory and economic headwinds. Our top takeaways from the panel discussion are below, and the full recording is available here.


AI Enforcement Starts with Washing: The SEC Charges its First AI Fraud Cases

by Andrew J. Ceresney, Charu A. Chandrasekhar, Avi Gesser, Arian M. June, Robert B. Kaplan, Julie M. Riewe, Jeff Robins, and Kristin A. Snyder


Top (left to right): Andrew J. Ceresney, Charu A. Chandrasekhar, Avi Gesser, and Arian M. June
Bottom (left to right): Robert B. Kaplan, Julie M. Riewe, Jeff Robins, and Kristin A. Snyder (photos courtesy of Debevoise & Plimpton LLP)

On March 18, 2024, the U.S. Securities and Exchange Commission (“SEC”) announced settled charges against two investment advisers, Delphia (USA) Inc. (“Delphia”) and Global Predictions Inc. (“Global Predictions”) for making false and misleading statements about their alleged use of artificial intelligence (“AI”) in connection with providing investment advice. These settlements are the SEC’s first-ever cases charging violations of the antifraud provisions of the federal securities laws in connection with AI disclosures, and also include the first settled charges involving AI in connection with the Marketing and Compliance Rules under the Investment Advisers Act of 1940 (“Advisers Act”). The matters reflect Chair Gensler’s determination to target “AI washing”—securities fraud in connection with AI disclosures under existing provisions of the federal securities laws—and underscore that public companies, investment advisers and broker-dealers will face rapidly increasing scrutiny from the SEC in connection with their AI disclosures, policies and procedures. We have previously discussed Chair Gensler’s scrutiny of AI washing and AI disclosure risk in Form ADV Part 2A filings. In this client alert, we discuss the charges and AI disclosure and compliance takeaways.


CFTC Year in Review: 23 Takeaways From 2023 and Predictions for 2024

by Matthew B. Kulkin, Elizabeth L. Mitchell, Gretchen Passe Roin, Timothy F. Silva, Tiffany J. Smith, Dino Wu, Matthew Beville, and Joseph M. Toner


Top (left to right): Matthew B. Kulkin, Elizabeth L. Mitchell, Gretchen Passe Roin, and Timothy F. Silva
Bottom (left to right): Tiffany J. Smith, Dino Wu, Matthew Beville, and Joseph M. Toner (photos courtesy of Wilmer Cutler Pickering Hale and Dorr LLP)

At an industry event in early 2023, Commodity Futures Trading Commission (CFTC or the Commission) Chairman Rostin Behnam set out a comprehensive agenda.[1] When Chairman Behnam detailed the CFTC’s 2023 work plan, the CFTC was building on its first year with a full slate of Commissioners, new Division Directors, and senior leadership. As we look back on the recently completed calendar year and turn our attention to the rapidly approaching 2024 presidential and congressional elections, the CFTC seems poised for another year packed with a flurry of regulatory, policy, and enforcement activity. This article lays out 23 of our key takeaways from the past year and offers insights on what might take place in the coming months.


Blockchain Analytics: A Reliable Use of Artificial Intelligence for Crime Detection and Legal Compliance

by Sujit Raman and Thomas Armstrong


From left to right: Sujit Raman and Thomas Armstrong. (Photos courtesy of authors).

Everyone these days is talking about artificial intelligence and how to use it responsibly. Among law enforcement and compliance professionals, discussions around the responsible use of AI are nothing new. Even so, recent advances in machine learning have turbocharged AI’s transformative potential in detecting, preventing, and—in a particular sense—even predicting illicit activity. These advances are especially notable in the field of blockchain analytics: the process of associating digital asset wallets to real-world entities.

In a recent, pathbreaking opinion and order, U.S. District Judge Randolph Moss rejected a criminal defendant’s challenge to the government’s evidentiary use of blockchain analytics to link him to illicit financial activity.[1] Many courts—including, just a few days ago, a U.S. district court in Massachusetts[2]—have relied on the validity of blockchain analytics when taking pre-trial actions like issuing seizure orders and authorizing arrest warrants; Judge Moss’s opinion is the first trial court examination of this powerful analytic capability. Taken together, this growing body of legal authority forcefully affirms the reliability—and therefore admissibility in court—of evidence derived from such analytics.


EU AI Act Will Be World’s First Comprehensive AI Law

by Beth Burgin Waller, Patrick J. Austin, and Ross Broudy


Left to right: Beth Burgin Waller, Patrick J. Austin, and Ross Broudy (photos courtesy of Woods Rogers Vandeventer Black PLC)

On March 13, 2024, the European Union’s parliament formally approved the EU AI Act, making it the world’s first major set of regulatory ground rules to govern generative artificial intelligence (AI) technology. The EU AI Act, after passing final checks and receiving endorsement from the European Council, is expected to become law in spring 2024, likely May or June.

The EU AI Act will have a phased-in approach. For example, regulations governing providers of generative AI systems are expected to go into effect one year after the regulation becomes law, while prohibitions on AI systems posing an “unacceptable risk” to the health, safety, or fundamental rights of the public will go into effect six months after the implementation date. The complete set of regulations in the EU AI Act are expected to be in force by mid-2026.


Recent Regulatory Announcements Confirm Increased Scrutiny of “AI-Washing”

by Tami Stark, Courtney Hague Andrews, Maria Beguiristain, Joel M. Cohen, Daniel Levin, Darryl Lew, and Marietou Diouf


Top (left to right): Tami Stark, Courtney Hague Andrews, Maria Beguiristain, and Joel M. Cohen
Bottom (left to right): Daniel Levin, Darryl Lew, and Marietou Diouf (Photos courtesy of White & Case LLP)

In December 2023, we published an alert concerning US Securities and Exchange Commission (“SEC”) Chair Gary Gensler’s warning to public companies against “AI washing” – that is, making unfounded claims regarding artificial intelligence (“AI”) capabilities.[1] It is no surprise that, since then, regulators and the US Department of Justice (“DOJ”) have repeated this warning, and the SEC has publicized an AI-related enforcement action that would not typically receive such emphasis.

In January 2024, the SEC’s Office of Investor Education and Advocacy issued a joint alert with the North American Securities Administrators Association and the Financial Industry Regulatory Authority warning investors of an increase in investment frauds involving the purported use of AI and other emerging technologies.[2] Similarly, the Commodity Futures Trading Commission Office of Customer Education and Outreach issued a customer advisory warning the public against investing in schemes touting “AI-created algorithms” that promise guaranteed or unreasonably high returns.[3]
