EU AI Act Will Be World’s First Comprehensive AI Law

by Beth Burgin Waller, Patrick J. Austin, and Ross Broudy

Left to right: Beth Burgin Waller, Patrick J. Austin, and Ross Broudy (photos courtesy of Woods Rogers Vandeventer Black PLC)

On March 13, 2024, the European Parliament formally approved the EU AI Act, making it the world’s first comprehensive set of regulatory ground rules governing artificial intelligence (AI) technology, including generative AI. After passing final checks and receiving endorsement from the European Council, the EU AI Act is expected to become law in spring 2024, likely in May or June.

The EU AI Act will take a phased approach. For example, regulations governing providers of generative AI systems are expected to take effect one year after the regulation becomes law, while prohibitions on AI systems posing an “unacceptable risk” to the health, safety, or fundamental rights of the public will take effect six months after the implementation date. The complete set of regulations in the EU AI Act is expected to be in force by mid-2026.

Recent Regulatory Announcements Confirm Increased Scrutiny of “AI-Washing”

by Tami Stark, Courtney Hague Andrews, Maria Beguiristain, Joel M. Cohen, Daniel Levin, Darryl Lew, and Marietou Diouf

Top (left to right): Tami Stark, Courtney Hague Andrews, Maria Beguiristain, and Joel M. Cohen
Bottom (left to right): Daniel Levin, Darryl Lew, and Marietou Diouf (Photos courtesy of White & Case LLP)

In December 2023, we published an alert concerning US Securities and Exchange Commission (“SEC”) Chair Gary Gensler’s warning to public companies against “AI washing” – that is, making unfounded claims regarding artificial intelligence (“AI”) capabilities.[1] It is no surprise that, since then, regulators and the US Department of Justice (“DOJ”) have repeated this warning, and the SEC has publicized an AI-related enforcement action that typically would not receive such emphasis.

In January 2024, the SEC’s Office of Investor Education and Advocacy issued a joint alert with the North American Securities Administrators Association and the Financial Industry Regulatory Authority warning investors of an increase in investment frauds involving the purported use of AI and other emerging technologies.[2] Similarly, the Commodity Futures Trading Commission Office of Customer Education and Outreach issued a customer advisory warning the public against investing in schemes touting “AI-created algorithms” that promise guaranteed or unreasonably high returns.[3]

Department of Commerce, Department of the Treasury, and Department of Justice Tri-Seal Compliance Note: Obligations of foreign-based persons to comply with U.S. sanctions and export control laws

by the Department of Commerce, Department of the Treasury, and Department of Justice

OVERVIEW

Today’s increasingly interconnected global marketplace offers unprecedented opportunities for companies around the world to trade with the United States and one another, contributing to economic growth. At the same time, malign regimes and other bad actors may attempt to misuse the commercial and financial channels that facilitate foreign trade to acquire goods, technology, and services that risk undermining U.S. national security and foreign policy and that challenge global peace and prosperity. In response to such risks, the United States has put in place robust sanctions and export controls to restrict the ability of sanctioned actors to misuse the U.S. financial and commercial system in furtherance of malign activities.

These measures can create legal exposure not only for U.S. persons, but also for non-U.S. companies that continue to engage with sanctioned jurisdictions or persons in violation of applicable laws. To mitigate the risks of non-compliance, companies outside of the United States should be aware of how their activities may implicate U.S. sanctions and export control laws. This Note highlights the applicability of U.S. sanctions and export control laws to persons and entities located abroad, as well as the enforcement mechanisms available to the U.S. government to hold non-U.S. persons accountable for violations of such laws, including criminal prosecution. It further provides an overview of compliance considerations for non-U.S. companies and compliance measures to help mitigate their risk.

State Governments Move to Regulate AI in 2024

by Louis W. Tompros, Arianna Evers, Eric P. Lesser, Allie Talus, and Lauren V. Valledor

(Left to right) Louis W. Tompros, Arianna Evers, Eric P. Lesser, Allie Talus, and Lauren V. Valledor (Photos courtesy of Wilmer Cutler Pickering Hale and Dorr LLP)

Recently, New York Governor Kathy Hochul proposed sweeping artificial intelligence (AI) regulatory measures intended to protect against untrustworthy and fraudulent uses of AI. Presented as part of her FY 2025 Executive Budget, the bill would amend existing penal, civil rights, and election laws, establishing a private right of action for voters and candidates harmed by deceptive AI-generated election materials and criminalizing certain uses of AI, among other measures. Governor Hochul’s proposals are part of a wider trend of governors and state lawmakers taking more expansive measures to regulate AI, a trend that deserves attention from businesses developing and using AI.

Commerce Department Proposes Cybersecurity/AI Reporting and “KYC” Requirements for Certain Cloud Providers

by Robert Stankey, K.C. Halm, Michael T. Borgia, Andrew M. Lewis, and Assaf Ariely

Left to right: Robert Stankey, K.C. Halm, Michael T. Borgia, Andrew M. Lewis, and Assaf Ariely (photos courtesy of Davis Wright Tremaine LLP)

IaaS providers would need to verify foreign users’ identities (aka “know your customer”) and report certain AI model training activities under the proposed rules

The U.S. Department of Commerce’s (“Commerce”) Bureau of Industry and Security (“BIS”) has issued a proposed rule (the “Proposed Rule”) that would impose significant diligence, reporting, and recordkeeping requirements on U.S. providers of Infrastructure as a Service (IaaS) and their foreign resellers. IaaS is generally considered to be a cloud computing model that provides users with remote access to servers, storage, networking, and virtualization.

The Proposed Rule would require U.S. IaaS providers to:

  • Implement and maintain a “Customer Identification Program” (CIP), which must include detailed know-your-customer (KYC) procedures for identifying and reporting foreign customers to Commerce; and
  • Report transactions involving foreign persons that “could result in the training of a large AI model with potential capabilities that could be used in malicious cyber-enabled activity.”

U.S. Cybersecurity and Data Privacy Outlook and Review – 2024

by Alexander H. Southwell and Snezhana Stadnik Tapia

From left to right: Alexander H. Southwell and Snezhana Stadnik Tapia (photos courtesy of Gibson, Dunn & Crutcher LLP)

As with previous years, the privacy and cybersecurity landscape continued to evolve substantially over the course of 2023. We recently provided a review of some of the most significant developments on this topic in the U.S. in the eleventh edition of Gibson Dunn’s U.S. Cybersecurity and Data Privacy Outlook and Review.

Below, we summarize the past year’s developments and future prospects, including the wave of new privacy and cyber legal and regulatory advances at the federal and state levels. This past year, states continued to take the lead on enacting privacy legislation, while branches of the federal government focused on data security, sensitive data, and artificial intelligence (“AI”). The surge of civil litigation over web-tracking technologies also endured. In 2024, we expect the amplified focus on privacy and cybersecurity issues to continue, particularly with respect to emerging technologies such as AI.

Navigating Compliance Risks in Robotics Applications within EU and US Legal Frameworks

by Wanda R. Lopuch, Ph.D.

New Technologies in the European Union and the United States

(Photo courtesy of author)

In the realm of technological innovation, robotics stands out due to its rapid growth and transformative potential. However, this potential brings myriad compliance risks, particularly when navigating the complex legal landscapes of the European Union (EU) and the United States (US). Below, I explore these risks, focusing on the divergent legal frameworks of the EU and the US and the challenges they pose to robotics applications.

Court Dismisses Most Claims in Authors’ Lawsuit Against OpenAI

by Angela Dunning, Arminda Bepko, and Jessica Graham

From left to right: Angela Dunning, Arminda Bepko & Jessica Graham (photos courtesy of Cleary Gottlieb Steen & Hamilton LLP)

This week saw yet another California federal court dismiss copyright and related claims arising out of the training and output of a generative AI model in Tremblay v. OpenAI, Inc.,[1] a putative class action filed on behalf of a group of authors alleging that OpenAI infringed their copyrighted literary works by using them to train ChatGPT.[2] OpenAI moved to dismiss all claims against it, save the claim for direct copyright infringement, and the court largely sided with OpenAI.

DOJ Announces Initiative to Combat AI-Assisted Crime

by Helen V. Cantwell, Andrew J. Ceresney, Avi Gesser, Andrew M. Levine, David A. O’Neil, Winston M. Paes, Jane Shvets, Bruce E. Yannett, and Douglas S. Zolkind

Top (left to right): Helen V. Cantwell, Andrew J. Ceresney, Avi Gesser, Andrew M. Levine, and David A. O’Neil
Bottom (left to right): Winston M. Paes, Jane Shvets, Bruce E. Yannett, and Douglas S. Zolkind (photos courtesy of Debevoise & Plimpton LLP)

On February 14, 2024, Deputy Attorney General Lisa O. Monaco announced an initiative within the U.S. Department of Justice to ramp up the detection and prosecution of crimes perpetrated through artificial intelligence (AI) technology, including seeking harsher sentences for certain AI-assisted crimes. Monaco also announced a new effort to evaluate how the Department can best use AI internally to advance its mission while guarding against AI risks.

FinCEN Proposes Highly Anticipated Investment Adviser AML/CFT Rule

by David Sewell, Timothy Clark, Stephanie Brown-Cripps, Nathaniel Balk, Nathalie Kupfer, and Rosie Jiang

Top (left to right): David Sewell, Timothy Clark, and Stephanie Brown-Cripps
Bottom (left to right): Nathaniel Balk, Nathalie Kupfer, and Rosie Jiang
(Photos courtesy of Freshfields Bruckhaus Deringer LLP)

On February 13, 2024, the U.S. Treasury Department’s Financial Crimes Enforcement Network (FinCEN) issued a proposed rule to extend anti-money laundering (AML) and countering the financing of terrorism (CFT) compliance obligations to certain types of investment advisers operating in the United States (Proposed Rule).[1] The agency simultaneously released a “2024 Investment Adviser Risk Assessment” (Risk Assessment), its first comprehensive effort to describe and measure “illicit finance threats involving investment advisers.”[2]

FinCEN’s release marks the latest development in a decades-old debate about whether investment advisers should be subject to the Bank Secrecy Act (BSA) and the attendant AML/CFT requirements that have long been applied to banks, broker-dealers, and other financial institutions. If adopted in the current (or a similar) form, the Proposed Rule would bring this long-running debate to a close once and for all.

Below, we briefly summarize the Proposed Rule, including its scope, requirements, and potential implications, and highlight open questions and next steps.
