by William Savitt, Mark F. Veblen, Kevin S. Schwartz, Noah B. Yavitz, and Courtney D. Hauck
On Monday, the Biden Administration issued a long-awaited executive order on artificial intelligence, directing agencies across the federal government to take steps to respond to the rapid expansion in AI technology. The order attempts to fill a gap in national leadership on AI issues, with Congress showing little progress on any comprehensive legislation. The order mandates regulatory action that could affect companies throughout the domestic economy, including:
- Risk management: In January 2023, the White House released a voluntary AI Risk Management Framework to assist private organizations in addressing AI-related risk. The new executive order requires a range of federal agencies to evaluate AI risk within the key industries they oversee, after which regulators must publish guidelines for private companies within those industries to incorporate the Framework. The Treasury Department, for example, has been given until March 2024 to publish a report on how the banking sector can manage cybersecurity risk flowing from the use of AI tools. The Commerce Department has been given until June 2024 to help reduce the risks posed by AI-based synthetic content (including “deep fakes” such as cloning of the President’s voice, which earned mention in his remarks introducing the order), by identifying existing standards and proposing the development of new science-backed standards and techniques for authenticating, detecting, labeling, and, where appropriate, preventing synthetic content. Importantly, the Commerce Department has also been given 90 days to use the Defense Production Act to require companies developing certain AI models with national security implications to provide ongoing reports to the federal government addressing the physical and cybersecurity protections taken to assure the model’s integrity against sophisticated threats.
- AI-driven discrimination: The order also broadly directs agencies to evaluate the potential for AI-driven discrimination, building on the Equal Employment Opportunity Commission’s recent designation of such discrimination as an enforcement priority (discussed here). The Department of Health and Human Services will examine the potential for AI-based discrimination in healthcare, while the Consumer Financial Protection Bureau is encouraged, and the Department of Housing and Urban Development is required, to provide guidance on how existing laws banning discrimination in lending and housing, respectively, govern the use of AI technology.
- Marketplace competition: In parallel with recent competition enforcement efforts against large technology enterprises, the order encourages federal agencies to focus on concentration of market power and endeavor to foster competition in the AI marketplace. This may include future rule-making by the FTC, whose chair and staff have already singled out AI as an area of possible anticompetitive behavior.
- Workforce management: As expected, the Biden Administration has also included labor-friendly provisions designed to shield workers from perceived abuse of AI by employers. In addition to precatory language cautioning against the use of AI to “undermine rights, worsen job quality, encourage undue worker surveillance, lessen market competition, introduce new health and safety risks, or cause harmful labor-force disruptions,” the order directs the Department of Labor to issue guidance delineating how the use of AI to track workers can violate federal labor law.
While the impact of this order will unfold over the next year, as agencies respond to the presidential directive, it marks a significant shift towards federal oversight of AI technology. Boards and management should monitor these developments as appropriate within the framework of risk management — and may wish to make their voices heard in the regulatory process, to the extent the initiative impacts corporate priorities.
William Savitt, Mark F. Veblen, Kevin S. Schwartz, and Noah B. Yavitz are Partners and Courtney D. Hauck is an Associate at Wachtell, Lipton, Rosen & Katz LLP. This post first appeared on the firm’s blog.
The views, opinions and positions expressed within all posts are those of the author(s) alone and do not represent those of the Program on Corporate Compliance and Enforcement (PCCE) or of the New York University School of Law. PCCE makes no representations as to the accuracy, completeness and validity of any statements made on this site and will not be liable for any errors, omissions or representations. The copyright of this content belongs to the author(s) and any liability with regards to infringement of intellectual property rights remains with the author(s).