by Avi Gesser, Douglas Zolkind, Matt Kelly, Sarah Wolf, Scott J. Woods, and Karen Joo
We recently highlighted the need for companies to manage risks associated with the adoption of AI technology, including the malicious use of real-time deepfakes (i.e., AI-generated audio or video that impersonates a real person). In this article, we address three AI-related insider risks that warrant special attention from corporate compliance departments (i.e., insider deepfakes, barrier evasion, and model manipulation) and present possible ways to mitigate them.