FTC Announces New Enforcement Initiative Targeting Deceptive AI Practices

by Robert A. Cohen, James W. Haldin, Daniel S. Kahn, Maude Paquin, and Michael Scheinkman

The Federal Trade Commission launched Operation AI Comply, announcing enforcement actions against five companies for alleged deception involving artificial intelligence.  The actions are the latest instance of U.S. regulatory scrutiny of AI-related misconduct.

Background

On September 25, 2024, as part of a new enforcement “sweep” called Operation AI Comply, the FTC announced enforcement actions against five companies that allegedly used artificial intelligence (AI) to “supercharge deceptive or unfair conduct that harms consumers.”  According to the FTC, these cases showcase how “hype surrounding AI” is used to “lure consumers into bogus schemes” and to provide AI-based tools that themselves can be used to deceive consumers.  In announcing the actions, FTC Chair Lina Khan stated that “[t]he FTC’s enforcement actions make clear that there is no AI exemption from the laws on the books.”

The sweep is the latest development in the FTC’s continued focus on deceptive schemes involving AI.  In January, the FTC hosted a Tech Summit on AI, where Commissioner Rebecca Slaughter emphasized that the FTC would leverage “the full panoply of [its] statutory tools” to understand the “incentives and consequences” of AI and pursue enforcement actions where appropriate.  And earlier this month, the Director of the Bureau of Consumer Protection, Samuel Levine, reiterated that the FTC is “taking a proactive approach to addressing AI-related harms,” referring to a February settlement with three individuals and related entities who allegedly defrauded customers through an ecommerce money-making scheme that the defendants claimed was “powered by artificial intelligence.”

Operation AI Comply

As part of Operation AI Comply, the FTC announced enforcement actions against five companies (and related individuals).  Four of the actions were unanimously authorized by the Commission and alleged that companies failed to live up to claims they made regarding their use of AI, a tactic known as “AI washing,” with Commissioner Andrew Ferguson noting that the FTC’s actions sought to hold the companies “to the same standards for honest-business conduct that apply to every industry.”  The fifth and final action involved claims by the FTC that a company’s AI technology could potentially be used to mislead consumers, but lacked any allegations that consumers were actually misled, drawing a strong rebuke from two dissenting Commissioners.  All of the complaints allege unfairness and/or deception in violation of Section 5 of the FTC Act; for defendants that did not resolve the charges, the FTC also alleged violations of the Consumer Review Fairness Act (15 U.S.C. § 45b) and the Business Opportunity Rule (16 C.F.R. Part 437). 

  • False claims about AI services. Four of the actions involved companies that allegedly misrepresented the capabilities of their AI services:
    • The FTC alleged that DoNotPay, a company offering a service it claimed to be “the world’s first robot lawyer” that would “replace the $200-billion-dollar legal industry with artificial intelligence,” falsely promised that its AI tools could generate valid legal documents and detect legal violations on small businesses’ websites, when in fact its tools were untested and ineffective and the company had not hired or retained any attorneys. DoNotPay settled the claims for $193,000 and agreed to send notices warning subscribers of the service’s limitations.
    • The FTC alleged that Ascend Ecom, Ecommerce Empire Builders, and FBA Machine falsely claimed that users could quickly earn money by using the companies’ AI technology to open online storefronts on various ecommerce platforms. In each instance, the FTC obtained temporary injunctive relief while it pursues its claims in federal court.
  • Potential for AI technology to defraud. The fifth action resolved claims against Rytr, a company offering an AI “writing assistant” with over 40 use cases, including the generation of content for consumer reviews.  The FTC did not allege that any of the consumer reviews were made publicly available in the form generated by the tool or resulted in any harm to consumers.  Rather, the FTC alleged that Rytr’s tool provided the “means and instrumentalities” to produce AI-generated content—in the form of potentially misleading consumer reviews—that could have been used to deceive consumers.  Based on that theory, the FTC’s proposed order bans Rytr from providing any AI service to generate consumer reviews or testimonials in the future, though it continues to permit other use cases. 
    • Dissents. Commissioners Melissa Holyoak and Ferguson dissented from the decision, challenging it as an unjustified expansion of FTC precedent that would stifle innovation and ultimately harm consumers.  Commissioner Holyoak’s dissent warned against advancing legal theories via settlement that are unlikely to win in court, highlighting that the complaint did not contain any allegations that Rytr “deceived or caused injury to a consumer,” and that the majority failed to properly account for potential consumer benefits associated with the functionality offered by the tool.  Similarly, Commissioner Ferguson’s dissent noted that the FTC’s legal theory represents an unwarranted and dramatic expansion of the historical application of “means-and-instrumentalities” liability, which has traditionally involved products or practices that are inherently deceptive and required a showing that the provider of the offending product or service had knowledge of, or reason to expect, misuse.  Commissioner Ferguson expressed particular concern regarding the lack of a limiting principle in the majority’s theory: by their logic, any product or service—pencils, paper, computers, billboards—that could potentially be used to write a statement that could deceive consumers could subject its maker to liability.  More broadly, both Commissioners observed that the decision could stifle innovation in the AI space.

Broader AI Enforcement

The FTC’s Operation AI Comply is further evidence of increasing scrutiny of AI-related misconduct.  Other U.S. authorities—most notably, the Securities and Exchange Commission (SEC), the Department of Justice (DOJ), and various State Attorneys General (State AGs)—have warned about the risks of AI misuse and are increasingly pursuing enforcement actions against alleged wrongdoers.  

The SEC has repeatedly warned against “AI washing” and inaccurate AI disclosures.  In December 2023 and March 2024 speeches, SEC Chair Gary Gensler cautioned against “AI washing” by misleading investors as to a company’s true AI capabilities, emphasizing that securities laws require “full, fair and truthful disclosure.”  Recent SEC enforcement actions underscore this focus:

  • In March 2024, the SEC announced settlements with two investment advisers, Delphia and Global Predictions, for allegedly making false and misleading statements about AI-based capabilities they did not have. The companies settled for a combined $400,000 in civil penalties.
  • In June 2024, the SEC announced charges against the founder and CEO of an AI-based recruitment start-up, Joonko Diversity, Inc., for allegedly misleading investors by making exaggerated claims that the company used “machine learning” and “AI-based technology” as an “automated recruiting solution.”

The DOJ has similarly signaled an increased focus on the impact of AI on its enforcement efforts.  In February, Deputy Attorney General Lisa Monaco announced a new initiative, Justice AI, to convene experts from academia, science, and industry “to understand and prepare for how AI will affect the Department’s mission and how to ensure we accelerate AI’s potential for good while guarding against its risks.”  In the same speech, she warned that the DOJ will utilize existing legal frameworks to pursue AI-related wrongdoing and that “our enforcement must be robust.”  To that end, Deputy Attorney General Monaco announced that “where prosecutors can seek stiffer sentences for offenses made significantly more dangerous by the misuse of AI—they will.”  And most recently, DOJ updated its corporate compliance guidance in September to emphasize the importance of evaluating and managing AI-related risk.

At the state level, State AGs have followed suit, with several—including Texas, Massachusetts, and California—warning that companies employing AI must ensure those uses comply with existing laws.

Key Takeaways

Recent AI enforcement actions from the FTC and other U.S. authorities offer several key takeaways:

  • Transparency. U.S. authorities are closely scrutinizing companies’ statements regarding their use of AI technology.  Companies should avoid “AI washing” by ensuring that any such statements are subject to their typical disclosure controls, accurately reflect their capabilities, and are precise about how they are using AI.  The FTC has also issued practical guidance for ensuring AI-related claims are accurate and supportable.
  • Adequate disclosures. In filings and reports, companies should communicate material risks associated with their use of AI technology and monitor SEC guidance on such disclosures.  Separately, companies should also consider providing consumer-facing disclaimers regarding the limitations of their AI services and disclosing their practices regarding the use of customer data in their AI technology, as well as the controls they have in place to help ensure that those practices are followed.
  • New application, same laws. As FTC Chair Khan stated in the announcement, “there is no AI exemption from the laws on the books.”  Companies should treat their implementation of AI as they do other areas of the business that require due diligence, testing, oversight, and disclosures.  Companies should also consider establishing internal policies to govern the use of AI, educating their boards on applicable disclosure requirements, and staying up to date on agency guidance.

Robert A. Cohen, James W. Haldin, Daniel S. Kahn, and Michael Scheinkman are Partners, and Maude Paquin is Counsel, at Davis Polk & Wardwell LLP. The article was first distributed by the firm as a client update.

The views, opinions, and positions expressed within all posts are those of the author(s) alone and do not represent those of the Program on Corporate Compliance and Enforcement (PCCE) or of the New York University School of Law. PCCE makes no representations as to the accuracy, completeness, or validity of any statements made on this site and will not be liable for any errors, omissions, or representations. The copyright of this content belongs to the author(s), and any liability with regard to infringement of intellectual property rights remains with the author(s).