Tag Archives: Cameron Sharp

SEC’s Focus on Cyber and AI to Continue Under Trump Administration

by Andrew J. Ceresney, Charu A. Chandrasekhar, Luke Dembosky, Avi Gesser, Erez Liebermann, Julie M. Riewe, Jeff Robins, Kristin A. Snyder, and Cameron Sharp

Photos of the authors

Top left to right: Andrew J. Ceresney, Charu A. Chandrasekhar, Luke Dembosky, and Avi Gesser. Bottom left to right: Erez Liebermann, Julie M. Riewe, Jeff Robins, and Kristin A. Snyder. (Photos courtesy of Debevoise & Plimpton LLP).

On February 20, 2025, the SEC announced the creation of the Cyber and Emerging Technologies Unit (“CETU”) to focus on “combatting cyber-related misconduct and to protect retail investors from bad actors in the emerging technologies space.” In this blog post, we provide an overview of the announcement, which illustrates that the Trump administration will continue to prioritize SEC cybersecurity and artificial intelligence examinations and enforcement, with a particular emphasis on fraudulent conduct impacting retail investors.

Continue reading

Recently Enacted AI Law in Colorado: Yet Another Reason to Implement an AI Governance Program

by Avi Gesser, Erez Liebermann, Matt Kelly, Martha Hirst, Andreas Constantine Pavlou, Cameron Sharp, and Annabella M. Waszkiewicz

Photos of the authors.

Top left to right: Avi Gesser, Erez Liebermann, Matt Kelly, and Martha Hirst. Bottom left to right: Andreas Constantine Pavlou, Cameron Sharp, and Annabella M. Waszkiewicz. (Photos courtesy of Debevoise & Plimpton LLP)

On May 17, 2024, Colorado passed Senate Bill 24-205 (“the Colorado AI Law” or “the Law”), a broad law regulating so-called high-risk AI systems that will become effective on February 1, 2026. The Law imposes sweeping obligations on both AI system deployers and developers doing business in Colorado, including a duty of reasonable care to protect Colorado residents from any known or reasonably foreseeable risks of algorithmic discrimination.

Continue reading

The Top Eight AI Adoption Failures and How to Avoid Them

by Avi Gesser, Matt Kelly, Samuel J. Allaman, Michelle H. Bao, Anna R. Gressel, Michael Pizzi, Lex Gaillard, and Cameron Sharp

Photos of the authors

Top left to right: Avi Gesser, Matt Kelly, Samuel J. Allaman, and Michelle H. Bao.
Bottom left to right: Anna R. Gressel, Michael Pizzi, Lex Gaillard, and Cameron Sharp.
(Photos courtesy of Debevoise & Plimpton LLP)

Over the past three years, we have observed many companies in a wide range of sectors adopt Artificial Intelligence (“AI”) applications for a host of promising use cases. In some instances, however, those efforts have ended up being less valuable than anticipated—and in a few cases, were abandoned altogether—because certain risks associated with adopting AI were not properly considered or addressed before or during implementation. These risks include issues related to cybersecurity, privacy, contracting, intellectual property, data quality, business continuity, disclosure, and fairness.

In this Debevoise Data Blog post, we examine how the manifestation of these risks can lead to AI adoption “failure” and identify ways companies can mitigate these risks to achieve their goals when implementing AI applications.

Continue reading