by Avi Gesser, Douglas Zolkind, Matt Kelly, Sarah Wolf, Scott J. Woods, and Karen Joo

We recently highlighted the need for companies to manage the risks associated with adopting AI technology, including the malicious use of real-time deepfakes (i.e., AI-generated audio or video that impersonates a real person). In this article, we address three AI-related insider risks that warrant special attention from corporate compliance departments (namely, insider deepfakes, barrier evasion, and model manipulation) and present possible ways to mitigate them.