FTC Warns Companies about Generative AI

by Kirk J. Nahra, Arianna Evers, Ali A. Jessani, and Roma Gujarathi


From left to right: Kirk J. Nahra, Arianna Evers, Ali A. Jessani, and Roma Gujarathi (Photos courtesy of Wilmer Cutler Pickering Hale and Dorr LLP)

On May 1, the Federal Trade Commission (FTC) released a blog post cautioning companies about the use of generative AI tools to change consumer behavior. Generative AI is a subset of AI that can generate new text, images, and other media based on patterns learned from existing data. The machine-generated content often feels authentic and realistic and can be convincingly similar to content created by a real person.

This FTC guidance is significant because the agency makes clear that manipulative use of generative AI can be illegal even if not all customers are harmed and even if those harmed do not comprise a class of people protected by anti-discrimination laws. Furthermore, the blog post indicates that the agency is carefully scrutinizing AI products under all prongs of the agency’s authority under the FTC Act.  

The FTC has previously focused on AI-related deception, such as companies making unsubstantiated claims about AI products or using generative AI for fraud. In this recent post, though, the agency also highlights the unfairness prong of its authority under the FTC Act, noting that a practice is unfair if it causes more harm than good. The FTC is aware that generative AI can steer people toward harmful decisions in areas such as finance, health, education, housing, and employment, and the agency is focusing intensely on the substantial consumer impact of these technologies across these broad sectors.

Human-Like Interactions and the Risk of Unearned Consumer Trust

In evaluating the risks of generative AI, the FTC uses the example of chatbots designed to provide information, advice, support, and companionship, noting that companies use such tools to influence consumers' beliefs, emotions, and behavior. Many of these chatbots are built to persuade and are designed to answer questions in confident language, even when the answers might be fictional.

In addition, machines are becoming increasingly human-like, using personal pronouns and emojis in their responses. This type of system design can lead people to place undue trust in these machines. Automation bias refers to people's tendency to over-rely on automated systems because their answers are designed to seem neutral or impartial. Generative AI tools can build consumer trust and lead consumers to believe that the tool understands them as a real human would.

Generative AI Design and Consumer Manipulation

The FTC alerts organizations to be cognizant of design elements that could trick consumers, warning that such elements have been a common thread in recent FTC actions involving system designs that manipulate consumers into making harmful choices. Manipulation can be unfair or deceptive under the FTC Act when it causes people to take actions contrary to their intended goals.

The agency specifically focuses on new uses of generative AI, such as customizing ads to specific audiences and placing ads within a generative AI feature. The FTC notes that it has consistently provided guidance on online ads and avoiding deception or unfairness, including work related to dark patterns and native advertising. 

Key Takeaways for Businesses

For companies using generative AI tools, the FTC provides clear guidance in this blog post:

  1. It should always be clear that an ad is an ad. 
  2. People should know whether they’re communicating with a real person or a machine. 
  3. Companies should be aware of downstream uses of generative AI tools and should train employees accordingly.
  4. Any generative AI output should distinguish clearly between what is organic and what is paid. 
  5. Consumers should know if an AI product’s response is steering them to a particular website or product because of a commercial relationship. 
  6. Companies should monitor and address the actual use and impact of any generative AI tools they use.

Kirk J. Nahra is a Partner, Arianna Evers is Special Counsel, Ali A. Jessani is a Senior Associate, and Roma Gujarathi is an Associate at Wilmer Cutler Pickering Hale and Dorr LLP. This post first appeared on the firm’s blog.

The views, opinions, and positions expressed within all posts are those of the author(s) alone and do not represent those of the Program on Corporate Compliance and Enforcement (PCCE) or of the New York University School of Law. PCCE makes no representations as to the accuracy, completeness, and validity of any statements made on this site and will not be liable for any errors, omissions, or representations. The copyright of this content belongs to the author(s), and any liability with regard to infringement of intellectual property rights remains with the author(s).