by Avi Gesser, Anna R. Gressel, and Parker C. Eudy
This post is Part IV of a five-part series by the authors on The Future of AI Regulation. For Part I, discussing U.S. banking regulators’ recent request for information regarding the use of AI by financial institutions, click here. For Part II, outlining key features of the EU’s draft AI legislation, click here. For Part III, discussing new obligations for companies under the EU’s draft AI legislation, click here.
In this installment, we discuss the Federal Trade Commission’s (“FTC”) recent blog post entitled “Aiming for truth, fairness, and equity in your company’s use of AI,” which was released on April 19, 2021.
The FTC’s Blog Post on Truth, Fairness, and Equity in AI
The FTC’s blog post follows the Commission’s guidance issued in 2020 on “Using Artificial Intelligence and Algorithms,” which we previously discussed on our webcast with Andrew Smith, head of the FTC’s Bureau of Consumer Protection. As Mr. Smith noted, the FTC’s enforcement actions and guidance both emphasize that the use of AI should be transparent, explainable, fair, empirically sound, and accountable. More recently, FTC Commissioner Rebecca Kelly Slaughter remarked that “[i]ncreased accountability means that companies—the same ones who benefit from the advantages and efficiencies of algorithms—must bear the responsibility of (1) conducting regular audits and impact assessments, and (2) facilitating appropriate redress for erroneous or unfair algorithmic decisions.”
The FTC’s new post may be a preview of its approach to AI enforcement under the Biden Administration. In contrast to the EU’s lengthy and comprehensive draft legislative framework, which proposes an array of new AI regulations, the FTC’s two-page document focuses on how existing U.S. laws prevent the use of AI that is biased or unfair. According to the FTC, those laws include:
- Section 5 of the FTC Act, which prohibits unfair or deceptive practices and, the FTC notes, covers the sale or use of racially biased algorithms.
- The Fair Credit Reporting Act, which prohibits the use of AI to unfairly deny people employment, housing, credit, insurance, or other benefits.
- The Equal Credit Opportunity Act, as well as its implementing Regulation B, which prohibits the use of a biased algorithm that results in credit discrimination based on protected classes, such as race or sex.
Drawing on past hearings, investigations, and enforcement actions, the FTC offers the following seven lessons on using AI truthfully, fairly, and equitably:
- Use complete and representative data sets to design AI models. If a data set is missing information from particular populations, using that data to build an AI model may yield results that are unfair or inequitable to legally protected groups.
- Test algorithms for discriminatory outcomes before using them and periodically thereafter.
- Make your use of AI transparent and available for independent reviews by using transparency frameworks and independent standards, by conducting and publishing the results of independent audits, and by opening your data or source code to outside inspection.
- Don’t exaggerate what your algorithm can do or whether it can deliver fair or unbiased results.
- Be truthful and upfront about how you use data. A business’s AI model shouldn’t derive from consumer data unless the business was authorized to collect and use such data.
- Use AI models that do more good than harm to consumers. The FTC may challenge a business’s use of an AI model “if it causes or is likely to cause substantial injury to consumers that is not reasonably avoidable by consumers and not outweighed by countervailing benefits to consumers or to competition.”
- Take accountability for how your AI models perform. The FTC indicates in the blog post that it will take action against businesses using algorithms that it determines are biased and result in credit discrimination.
Recent Enforcement Actions and the Ability of the FTC to Destroy Models
In discussing the proper use of personal data to train AI models, the FTC references its settlement with the photo app developer Everalbum, Inc., which we discussed in detail in a previous blog post. In its complaint, the FTC alleged that Everalbum represented to its users (i) that they must affirmatively opt in to enable the app’s facial recognition settings, and (ii) that Everalbum deleted users’ photos and videos whenever users deactivated their accounts. Both of these representations, the FTC alleged, were false and deceptive. As part of the settlement, Everalbum was required to delete the data that it had collected and retained without users’ consent. More importantly, the settlement also required the destruction of any facial recognition models or algorithms that Everalbum developed using users’ photos and videos that were collected through deceptive means. As we noted in our previous blog post, this is a very powerful enforcement tool in AI cases.
The FTC’s guidance coincides, however, with the Supreme Court’s recent decision curtailing the FTC’s remedial authority to seek monetary relief. In a unanimous decision issued on April 22, 2021, the Supreme Court ruled that Section 13(b) of the FTC Act does not grant the FTC the authority to recover restitution or disgorgement for ill-gotten gains in civil enforcement actions. As we discussed in a recent blog post, the ruling’s limitation on monetary relief may impact settlement negotiations and other forms of relief that the FTC seeks in future enforcement actions. The FTC may, for example, start relying more on other forms of redress, such as financial recovery through the administrative process, injunctive relief through court orders, and settlements requiring destruction of algorithms developed from biased data and of data collected through deceptive means.
Algorithmic Discrimination and Unfairness Laws
Perhaps the most notable part of the FTC’s blog post is its warning to companies that “[i]f your model causes more harm than good – that is, in Section 5 parlance, if it causes or is likely to cause substantial injury to consumers that is not reasonably avoidable by consumers and not outweighed by countervailing benefits to consumers or to competition – the FTC can challenge the use of that model as unfair.”
Beyond the FTC, unfairness authority provides a potential avenue for AI enforcement by state attorneys general charged with enforcing their states’ Unfair and Deceptive Acts and Practices (“UDAP”) statutes. Some of these state consumer protection laws include private rights of action, which may present opportunities for private plaintiffs seeking to challenge allegedly discriminatory, deceptive, or unfair uses of AI.
Takeaways
The FTC’s blog post is consistent with the current approach that we’ve seen from U.S. regulators on AI, which is to:
- Gather information from companies, through requests for information (“RFIs”) or regulatory exams, on their use of AI and the measures they are implementing to reduce bias and other risks;
- Remind companies that existing laws apply to AI, and that no new regulation is needed for them to bring enforcement actions against companies that use AI that is biased against protected classes or that use data in violation of privacy obligations; and
- Issue guidance on what they view as uses of AI that violate existing laws and bring enforcement actions against those companies that act contrary to that guidance.
Avi Gesser is a partner, and Anna R. Gressel and Parker C. Eudy are associates, at Debevoise & Plimpton LLP. This post originally appeared on Debevoise’s Data Blog.
Disclaimer
The views, opinions and positions expressed within all posts are those of the authors alone and do not represent those of the Program on Corporate Compliance and Enforcement or of New York University School of Law. The accuracy, completeness and validity of any statements made within this article are not guaranteed. We accept no liability for any errors, omissions or representations. The copyright of this content belongs to the authors and any liability with regards to infringement of intellectual property rights remains with them.