by Julia Solomon Ensor
Maybe you grew up daydreaming about artificial intelligence, or AI. You imagined its potential to change the future, possibly with an army of helpful robots to take on your least favorite human tasks. The Star Wars franchise had R2-D2. The Jetsons had Rosey. There was RoboCop. And when everything else was gone, the world had WALL-E, the stoic trash collector looking for love. Now, as a business owner, you’re always watching for the next big invention to fine-tune processes and increase profitability. And some marketers can’t resist taking advantage of that by using the language of AI and technology to try to make it seem like their products or services deliver all the answers.
Today, as part of Operation AI Comply, the FTC is announcing five cases exposing AI-related deception. First, we have four cases involving allegedly deceptive claims about AI-driven services, three of which are oldies but not-so-goodies: deceptive business opportunity scams that claim to use AI to help people earn more money, faster. We also have a settlement involving a company that offered a generative AI tool that let people create fake consumer reviews. Here’s what you need to know:
- DoNotPay: An FTC complaint claims U.K.-based DoNotPay told people its online subscription service acts as “the world’s first robot lawyer” and an “AI lawyer” by using a chatbot to prepare “ironclad” documents for the U.S. legal system. The complaint also says DoNotPay told small businesses its service could check their websites for law violations and help them avoid significant legal fees. In fact, according to the complaint, DoNotPay’s service didn’t live up to the hype. You’ll have 30 days to comment on a proposed settlement between the FTC and DoNotPay, which requires DoNotPay to stop misleading people, pay $193,000, and tell certain subscribers about the case.
- Ascend Ecom: An FTC complaint filed in California alleges a group of companies and their officers used deceptive earnings claims to convince people to invest in “risk free” business opportunities supposedly powered by AI. Then, when things went sour, the FTC says, the defendants refused to honor their “risk free” money-back guarantees, and threatened and intimidated people to keep them from publishing truthful reviews. According to the complaint, the defendants’ conduct violated the FTC Act, the Business Opportunity Rule, and the Consumer Review Fairness Act.
- Ecommerce Empire Builders: In a complaint filed in Pennsylvania, the FTC claims a company and its officer violated the FTC Act and the Business Opportunity Rule with their AI-infused earnings claims. According to the complaint, in addition to failing to provide required statements and disclosures, the defendants promised people they would quickly earn thousands of dollars a month in additional income by following proven strategies and investing in online stores “powered by artificial intelligence.” The complaint also says the defendants made clients sign contracts keeping them from writing and posting negative reviews, in violation of the Consumer Review Fairness Act.
- FBA Machine: In June, the FTC filed a complaint against a group of New Jersey-based businesses and their owner, claiming they used deceptive earnings claims to convince people to invest in a “surefire” business opportunity supposedly powered by AI. According to the complaint, the defendants promised people they could earn thousands of dollars in passive income. Then, the FTC says, the defendants threatened people who tried to share honest reviews and told people they couldn’t get refunds unless they withdrew their complaints. According to the FTC, through these tactics, which violate the FTC Act, the Business Opportunity Rule, and the Consumer Review Fairness Act, the defendants defrauded their customers of more than $15.9 million. The case is ongoing.
- Rytr: According to an FTC complaint, a Delaware-based company sold an AI-enabled writing assistant with a tool specifically designed for its customers to generate online reviews and testimonials. The problem? The complaint says Rytr customers could, with little input, generate an unlimited number of reviews with specific details that would almost certainly not be true for those users. According to the complaint, some Rytr customers used this tool to quickly generate thousands of false reviews that would have tricked people reading those reviews online. This, the FTC says, likely harmed many people and was unfair. Rytr has agreed to a proposed settlement prohibiting the company, or anyone working with it, from advertising or selling any service promoted for generating reviews. You can submit public comments on the proposed settlement for 30 days.
So, what’s the takeaway?
First, if you’re looking for AI-based tools to use in your business:
- Be skeptical of AI-related products that claim they can fully replace a qualified human professional. Firms are rushing to claim that their AI tools can replace the work of doctors, lawyers, and accountants. But when it comes to legal or financial advice, small mistakes lead to big problems, and not every firm can back up its claims with tools that are actually equipped to handle complicated, fact-intensive matters. AI tools can be a good starting point, but they’re no substitute for a qualified professional.
- Don’t take shortcuts on reviews. You know people read reviews before buying your product or service, and you might be tempted to use AI tools to help you fake your way to five stars. Don’t do it. By posting fake reviews, you betray your customers’ trust and hurt honest businesses trying to compete fairly. And you may violate the FTC Act and the FTC’s Rule on the Use of Consumer Reviews and Testimonials in the process. We’ve published a guide with tips on soliciting and paying for online reviews. Check it out and avoid problems.
Second, if you’re tempted to mention AI in ads to boost sales:
- Don’t say you use AI tools if you don’t. Easy enough, right? If you’re investigated by the FTC, our technologists and others can look at your product or service and figure out what’s really going on. Be aware that just using an AI tool when you’re developing your product is not the same as offering your customers a product with AI inside. Tell the truth.
- A lie in robot’s clothing is still a lie: the same old advertising principles apply. The FTC expects you to have a reasonable basis for any claim you make about your product or service. If special rules apply to the product or service you offer, like business opportunities, you must follow those as well. Don’t assume that using technological jargon or saying your product or program relies on AI changes the analysis. When it comes to business opportunities, if the FTC investigates you, we’re going to check whether you can back up your earnings and other claims, and whether you’re supplying appropriate disclosures to your customers. Don’t claim your program uses new technologies to help people make more money unless that’s true. Have concrete data demonstrating that what you’re promising is typical for your customers.
Finally, be informed:
- These cases are just the latest in the FTC’s ongoing work to combat AI-related issues in the marketplace from every angle. We’re checking to see whether products or services actually use AI as advertised and, if so, whether they work as marketers say they will. We’re examining whether AI and other automated tools are being used for fraud, deception, unfair manipulation, or other harmful purposes. On the back end, we’re looking at whether automated tools have biased or discriminatory impacts. You can read about other cases here: Automators, Career Step, NGL Labs, Rite Aid, CRI Genetics.
- For more information on how to avoid getting swept up in one of our law enforcement efforts, check out the FTC’s AI and Your Business series:
- Keep your AI claims in check
- Chatbots, deepfakes, and voice clones: AI deception for sale
- The Luring Test: AI and the engineering of consumer trust
- Watching the detectives: Suspicious marketing claims for tools that spot AI-generated content
- Can’t lose what you never had: Claims about digital ownership and creation in the age of generative AI
- Succor borne every minute
Julia Solomon Ensor is an Attorney in the FTC’s Bureau of Consumer Protection. This post first appeared on the FTC’s Business Blog.
The views, opinions, and positions expressed within all posts are those of the author(s) alone and do not represent those of the Program on Corporate Compliance and Enforcement (PCCE) or of the New York University School of Law. PCCE makes no representations as to the accuracy, completeness, and validity of any statements made on this site and will not be liable for any errors, omissions, or representations. The copyright of this content belongs to the author(s), and any liability with regard to infringement of intellectual property rights remains with the author(s).