Category Archives: Artificial Intelligence

The EU AI Act is Officially Passed – What We Know and What’s Still Unclear

by Avi Gesser, Matt Kelly, Robert Maddox, and Martha Hirst


From left to right: Avi Gesser, Matt Kelly, Robert Maddox, and Martha Hirst. (Photos courtesy of Debevoise & Plimpton LLP)

The EU AI Act (the “Act”) has completed the EU’s legislative process and passed into law; it enters into force on 1 August 2024. Most of the substantive requirements will apply two years later, from 2 August 2026, with the main exception being “Prohibited” AI systems, which will be banned from 2 February 2025.

Despite initial expectations of a sweeping and all-encompassing regulation, the final version of the Act reveals a narrower scope than some initially anticipated.

Continue reading

Treasury’s Report on AI (Part 2) – Managing AI-Specific Cybersecurity Risks in the Financial Sector

by Avi Gesser, Erez Liebermann, Matt Kelly, Jackie Dorward, and Joshua A. Goland


Top: Avi Gesser, Erez Liebermann, and Matt Kelly. Bottom: Jackie Dorward and Joshua A. Goland (Photos courtesy of Debevoise & Plimpton LLP)

This is the second post in the two-part Debevoise Data Blog series covering the U.S. Treasury Department’s report on Managing Artificial Intelligence-Specific Cybersecurity Risks in the Financial Services Sector (the “Report”).

In Part 1, we addressed the Report’s coverage of the state of AI regulation and best practices recommendations for AI risk management and governance. In Part 2, we review the Report’s assessment of AI-enhanced cybersecurity risks, as well as the risks of attacks against AI systems, and offer guidance on how financial institutions can respond to both types of risks.

Continue reading

Biden Administration Releases Proposed Rule on Outbound Investments in China

by Paul D. Marquardt and Kendall Howell


From left to right: Paul D. Marquardt and Kendall Howell (Photos courtesy of Davis Polk & Wardwell LLP)

The Biden administration released its proposed rule that would establish a regulatory framework for outbound investments in China, following its advance notice of proposed rulemaking released last August.

On June 21, 2024, the U.S. Department of the Treasury (Treasury) released its long-awaited notice of proposed rulemaking that would impose controls on outbound investments in China (the Proposed Rule). The Proposed Rule follows Treasury’s advance notice of proposed rulemaking (the ANPRM) released in August 2023 (discussed in this client update) and implements the Biden administration’s Executive Order 14105 (the Executive Order), which proposed a high-level framework to mitigate the risks to U.S. national security interests stemming from U.S. outbound investments in “countries of concern” (currently only China). Like the Executive Order and ANPRM, the Proposed Rule reflects an effort by the Biden administration to adopt a “narrow and targeted” program and is in large part directed at the “intangible benefits” of U.S. investment (e.g., management expertise, prestige, and know-how), rather than capital alone.[1]

Continue reading

CNIL Publishes New Guidelines on the Development of AI Systems

by David Dumont and Tiago Sérgio Cabral


David Dumont and Tiago Sérgio Cabral (photos courtesy of Hunton Andrews Kurth LLP)

On June 7, 2024, following a public consultation, the French Data Protection Authority (the “CNIL”) published the final version of its guidelines addressing the development of AI systems from a data protection perspective (the “Guidelines”). Read our blog on the pre-public consultation version of these Guidelines.

In the Guidelines, the CNIL states that, in its view, the successful development of AI systems can be reconciled with the challenges of protecting privacy.

Continue reading

Treasury and FSOC Sharpen Focus on Risks of AI in the Financial Sector

by Alison M. Hashmall, David Sewell, Beth George, Andrew Dockham, Megan M. Kayo, and Nathaniel Balk


Top left to right: Alison M. Hashmall, David Sewell, and Beth George. Bottom left to right: Andrew Dockham, Megan M. Kayo, and Nathaniel Balk. (Photos courtesy of Freshfields Bruckhaus Deringer LLP)

On June 6-7, 2024, the Financial Stability Oversight Council (FSOC or the Council) cosponsored a conference on AI and financial stability with the Brookings Institution (the FSOC Conference). The conference was billed as “an opportunity for the public and private sectors to convene to discuss potential systemic risks posed by AI in financial services, to explore the balance between encouraging innovation and mitigating risks, and to share insights on effective oversight of AI-related risks to financial stability.” The FSOC Conference featured noteworthy speeches by Secretary of the Treasury Janet Yellen (who chairs the Council) and Acting Comptroller of the Currency Michael Hsu. In a further sign of increased regulatory focus on AI in the financial industry, the Treasury Department also released a request for information on the Uses, Opportunities, and Risks of Artificial Intelligence (AI) in the Financial Services Sector (the AI RFI) during the conference – its most recent, and most comprehensive, effort to understand how AI is being used in the financial industry.

In this blog post, we first summarize the key questions raised and topics addressed in the AI RFI. We then summarize the key takeaways from FSOC’s conference on AI and discuss how these developments fit within the broader context of actions taken by the federal financial regulators in the AI space. Lastly, we lay out takeaways and the path ahead for financial institutions as they continue to navigate the rapid development of AI technology.

Continue reading

Recently Enacted AI Law in Colorado: Yet Another Reason to Implement an AI Governance Program

by Avi Gesser, Erez Liebermann, Matt Kelly, Martha Hirst, Andreas Constantine Pavlou, Cameron Sharp, and Annabella M. Waszkiewicz


Top left to right: Avi Gesser, Erez Liebermann, Matt Kelly, and Martha Hirst. Bottom left to right: Andreas Constantine Pavlou, Cameron Sharp, and Annabella M. Waszkiewicz. (Photos courtesy of Debevoise & Plimpton LLP)

On May 17, 2024, Colorado enacted Senate Bill 24-205 (“the Colorado AI Law” or “the Law”), a broad law regulating so-called high-risk AI systems that will become effective on February 1, 2026. The Law imposes sweeping obligations on both AI system deployers and developers doing business in Colorado, including a duty of reasonable care to protect Colorado residents from any known or reasonably foreseeable risks of algorithmic discrimination.

Continue reading

Succor Borne Every Minute

by Michael Atleson

Federal Trade Commission

Earnest chats with objects are not so unusual. Mark “The Bird” Fidrych, the famed Detroit Tiger, used to stand on the pitching mound whispering to the baseball. Forky, the highly animate utensil from Toy Story 4, once posed deep questions about friendship to a ceramic mug. And many of us have made repeated queries of the Magic 8 Ball despite its limited set of randomly generated answers.

Our talking to computers also goes way back, and that history is getting weirder. We’re seeing a wave of avatars and bots marketed to provide companionship, romance, therapy, portals to dead loved ones, and even religious guidance. It may be a function of AI companies making chatbots better at human mimicry in order to convince us that chatbots have social value worth paying for. Consider that some of these companies compare their products to magic (they aren’t), talk about the products having feelings (they don’t), or admit they just want people to feel that the products are magic or have feelings.

Continue reading

Limited-Risk AI—A Deep Dive Into Article 50 of the European Union’s AI Act

by Martin Braun, Anne Vallery, and Itsiq Benizri


Left to right: Martin Braun, Anne Vallery, and Itsiq Benizri (photos courtesy of the authors)

This blog post focuses on the transparency requirements associated with certain limited-risk artificial intelligence (AI) systems under Article 50 of the European Union’s AI Act.

As explained in our previous blog post, the AI Act’s overall risk-based approach means that, depending on the level of risk, different requirements apply. In total, there are four levels of risk: (1) unacceptable risk, in which case AI systems are prohibited (see our blog post on prohibited AI practices for more details); (2) high risk, in which case AI systems are subject to extensive requirements, including regarding transparency; (3) limited risk, which triggers only transparency requirements; and (4) minimal risk, which does not trigger any obligations.

Continue reading

Treasury’s Report on AI (Part 1) – Governance and Risk Management

by Charu A. Chandrasekhar, Avi Gesser, Erez Liebermann, Matt Kelly, Johanna Skrzypczyk, Michelle Huang, Sharon Shaji, and Annabella M. Waszkiewicz


Top: Charu A. Chandrasekhar, Avi Gesser, Erez Liebermann, and Matt Kelly
Bottom: Johanna Skrzypczyk, Michelle Huang, Sharon Shaji, and Annabella M. Waszkiewicz
(Photos courtesy of Debevoise & Plimpton LLP)

On March 27, 2024, the U.S. Department of the Treasury (“Treasury”) released a report on Managing Artificial Intelligence-Specific Cybersecurity Risks in the Financial Services Sector (the “Report”). The Report was released in response to President Biden’s Executive Order (“EO”) 14110 on Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence, which spearheaded a government-wide effort to issue Artificial Intelligence (“AI”) risk management guidelines consistent with the White House’s AI principles.

Continue reading
AI for IAs: How Artificial Intelligence Will Impact Investment Advisers

by Michael McDonald


Photo courtesy of Davis Wright Tremaine LLP

The use of artificial intelligence and machine learning technology solutions (“AI”) is becoming increasingly common across industries, including the registered investment adviser (“RIA”) space. A recent survey by AI platform Totumai and market research firm 8 Acre Perspective found that 12% of RIAs currently use AI technology in their businesses and another 48% plan to adopt it, meaning that 60% of RIAs can realistically be expected to use AI in the near future. Among other use cases, AI has the potential to be used by RIAs for portfolio management, customer service, compliance, investor communications, and fraud detection. While regulators are unlikely to prohibit the use of AI in the industry, they are likely to closely monitor and regulate specific applications and use cases. It is therefore essential for RIAs to understand these emerging rules and regulatory frameworks so they can appropriately leverage the many benefits of AI while keeping their businesses compliant with these new rules of the road. DWT recently launched a webinar series, “AI Across All Industries,” available here, that explores in depth the legal issues surrounding the use of AI.

Continue reading