Federal Agencies Will Jointly Look for Bias and Discrimination in AI

by Bradford Hardin, K.C. Halm, Aisha Smith, and Matt Jedreski

DOJ, FTC, CFPB, and EEOC Announce Joint Commitment to Use Existing Consumer Protection and Employment Authority to Oversee Use of Artificial Intelligence

On April 25, 2023, the Federal Trade Commission (FTC), the Civil Rights Division of the U.S. Department of Justice (DOJ), the Consumer Financial Protection Bureau (CFPB), and the U.S. Equal Employment Opportunity Commission (EEOC) released a joint statement highlighting their commitment to “vigorously use [their] collective authorities to protect individuals” with respect to artificial intelligence and automated systems (AI), which have the potential to negatively affect civil rights, fair competition, consumer protection, and equal opportunity. The regulators intend to use their existing authority to enforce consumer protection and employment laws, which apply regardless of the technology used to make decisions or deliver products and services. The joint statement outlines several key areas of focus for the agencies: ensuring that AI does not produce discriminatory outcomes, protecting consumers from unfair, deceptive, or abusive acts or practices (UDAAP), preventing anticompetitive practices that AI may facilitate or exacerbate, and promoting responsible and transparent development of AI systems. Rather than operating as if AI were unregulated, businesses should ensure that their use of AI complies with existing laws and regulations.

Previous Actions by the Agencies Involving AI Applications and Systems

The joint statement references each agency’s prior efforts to monitor developments and, in some instances, take enforcement or advisory action related to AI. The CFPB has issued guidance clarifying that consumer protection laws apply to financial products and services regardless of the technology used. The FTC has also been active in this space, requiring firms to destroy algorithms or work product generated from improperly collected data, and it has initiated a rulemaking focused on, among other things, the use of AI and automated systems. The DOJ has issued statements of interest and pursued settlements over alleged biases in certain screening systems. Finally, the EEOC’s strategic enforcement plan includes efforts to eliminate barriers in recruitment and hiring that result from employers’ increased use of automated systems. The EEOC has also highlighted accessibility issues as “low-hanging fruit,” and the soon-to-take-effect New York City law on the use of AI systems in hiring requires organizations to provide accommodations to those who are unable or unwilling to use such systems. These actions highlight the agencies’ proactive focus on promoting fair and transparent use of AI and signal that regulatory objectives related to AI are already addressed by existing laws and regulations.

Areas of Likely Focus and Potential Enforcement Activity

The joint statement suggests that areas of focus and potential enforcement activity include:

  1. Insufficient or faulty data – One of the main concerns with AI and automated systems is the use of biased or unrepresentative data sets in the model development and training process, which can lead AI systems to produce discriminatory or unfair outcomes even when no one intends them. Enforcement actions in this area could target organizations that fail to ensure their AI systems are trained on representative and unbiased data, or that do not take adequate measures to detect and mitigate potential biases in their AI-driven processes and decision-making (a minimal illustration of one such check appears after this list).
  2. Transparency and explainability – Another area of focus is the need for AI systems to be transparent and explainable. AI-driven decisions can significantly affect individuals and communities, and regulators expect that those designing the systems understand how they work and that those affected receive sufficient information to understand the basis for those decisions. Enforcement concerns may arise if organizations fail to provide sufficient explanations for AI-generated outcomes or do not disclose how their AI systems work and the inputs they rely upon. This could include addressing “black box” AI models that provide no clear insight into their decision-making processes or the factors that influence their results.
  3. Flawed design or use – The design and implementation of AI systems can play a critical role in determining whether they produce biased or discriminatory outcomes. Enforcement actions in this area could target organizations that design AI without appropriate controls to address potential risks and consequences, or that use AI in ways that lead to biased or discriminatory results. This may include cases where AI systems are designed to optimize a specific outcome without considering potential side effects, or where they are used in contexts for which they were not originally intended or validated, leading to unintended consequences.
  4. Accessibility – While it may be difficult for regulators and consumers to see how an AI system works and whether it produces problematic results, it is far easier to determine whether individuals with disabilities are able to access opportunities governed by AI systems. For example, if an organization uses AI-driven video interviews or scored games to assist in hiring, do applicants with visual, auditory, or other disabilities have the same access to those job opportunities? Potential enforcement activity may result from failure to provide alternative methods of access to AI-driven resources and services.
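
As a concrete illustration of the first item above, the sketch below applies the “four-fifths rule” from the EEOC’s Uniform Guidelines on Employee Selection Procedures to hypothetical screening outcomes. The group labels, counts, and function names are illustrative assumptions rather than a prescribed methodology, and a real bias audit would pair a check like this with appropriate statistical testing and legal review.

```python
# Hypothetical adverse-impact check using the "four-fifths rule" from the
# EEOC's Uniform Guidelines on Employee Selection Procedures. All data
# below are invented for illustration.
from collections import Counter

def selection_rates(outcomes):
    """outcomes: iterable of (group, selected) pairs -> selection rate per group."""
    totals, selected = Counter(), Counter()
    for group, was_selected in outcomes:
        totals[group] += 1
        if was_selected:
            selected[group] += 1
    return {g: selected[g] / totals[g] for g in totals}

def adverse_impact_flags(rates, threshold=0.8):
    """Flag any group whose selection rate falls below 80% of the highest rate."""
    best = max(rates.values())
    return {g: (r / best) < threshold for g, r in rates.items()}

# Hypothetical outcomes from an automated screen: (group, passed the screen?)
outcomes = ([("A", True)] * 60 + [("A", False)] * 40
            + [("B", True)] * 35 + [("B", False)] * 65)

rates = selection_rates(outcomes)
print(rates)                        # {'A': 0.6, 'B': 0.35}
print(adverse_impact_flags(rates))  # {'A': False, 'B': True}; 0.35 / 0.60 ≈ 0.58 < 0.8
```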

To manage risks associated with the use of AI, companies should plan to expand their current regulatory risk assessments and other compliance management systems, enhancing them as needed to detect, prevent, and remediate risks stemming from automated systems. One supporting control, decision-level audit logging, is sketched below.
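
This hypothetical sketch records each automated decision with the model version, the inputs the model actually saw, the outcome, and human-readable reasons, producing the audit trail needed to detect and remediate problems after the fact. The field names, file format, and example model name are assumptions for illustration, not a prescribed compliance standard.

```python
# A minimal sketch of decision-level audit logging for an automated system.
# Field names, file format, and the example model name are illustrative.
import json
from datetime import datetime, timezone

def log_decision(log_file, model_version, subject_id, inputs, outcome, reasons):
    """Append one automated decision as a JSON line for later review."""
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model_version": model_version,  # ties each outcome to the model that produced it
        "subject_id": subject_id,
        "inputs": inputs,                # the features the model actually saw
        "outcome": outcome,
        "reasons": reasons,              # human-readable basis for the decision
    }
    log_file.write(json.dumps(record) + "\n")

# Hypothetical usage for a credit-screening model.
with open("decisions.jsonl", "a") as f:
    log_decision(f, "credit-screen-v2.3", "applicant-001",
                 {"income": 52000, "debt_ratio": 0.31},
                 "declined", ["debt_ratio above policy threshold"])
```

Records like these also make periodic disparate-impact reviews, such as the four-fifths check sketched earlier, straightforward to run.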

National AI Policy and Regulatory Developments

The joint statement comes amidst an accelerating debate over a potential national AI policy and regulations. Recent actions within Congress and the Administration reveal a heightened interest in imposing new federal rules on AI systems and technology. These actions include Senator Schumer’s plans to introduce legislation regulating AI and related technologies and the White House’s Blueprint for an AI Bill of Rights. The recently published NIST AI Risk Management Framework and the FTC’s proposed rulemaking also aim to promote certainty and consistency in the use of AI. Finally, the Department of Commerce’s National Telecommunications and Information Administration (NTIA) has requested public input on AI “accountability” tools and practices as part of its ongoing efforts to develop policy recommendations for AI governance.

These efforts are driven, in part, by actions in Europe, China, and other regions to adopt and implement national AI policies.

Conclusion

The joint statement from the DOJ, FTC, CFPB, and EEOC signifies a growing awareness and concern among federal agencies about the potential risks and challenges posed by AI and automated systems. As AI continues to become more integrated into all aspects of daily life, the importance of addressing potential biases, transparency issues, and flawed design becomes increasingly critical. The joint statement serves as a reminder that innovation should be pursued responsibly and that there is no exemption from the law for AI technologies.

With an ongoing national debate on AI policy and regulations, it remains to be seen what additional measures may be taken to ensure the responsible development and use of AI systems. Nevertheless, the joint commitment by these federal agencies demonstrates a proactive approach and sets the stage for continued collaboration in regulatory enforcement.

Bradford Hardin and K.C. Halm are Partners and Aisha Smith and Matt Jedreski are Counsel at Davis Wright Tremaine LLP. This post first appeared on the firm’s blog.

The views, opinions, and positions expressed within all posts are those of the author(s) alone and do not represent those of the Program on Corporate Compliance and Enforcement (PCCE) or of the New York University School of Law. PCCE makes no representations as to the accuracy, completeness, and validity of any statements made on this site and will not be liable for any errors, omissions, or representations. The copyright of this content belongs to the author(s), and any liability with regard to infringement of intellectual property rights remains with the author(s).