FCC Ruling on AI-Facilitated Fraud Illustrates the Need for Forward-Looking Enterprise Risk Management

by William Savitt, Mark F. Veblen, Noah B. Yavitz, and Courtney D. Hauck


In response to a recent boom in AI-powered robocall scams, the U.S. Federal Communications Commission yesterday announced a Declaratory Ruling confirming that the Telephone Consumer Protection Act, which regulates telemarketing and robocalls, also applies to calls using AI-generated voices. Other federal agencies and state legislatures have similarly moved to police the use and abuse of audio “deepfakes,” realistic voice simulations that widely available tools can generate from brief recordings. As technology continues to outpace regulation, boards must embrace a proactive approach to risk management, accounting for AI’s capacity to compromise long-standing practices in cybersecurity and internal controls.

Emerging AI risk will vary by industry and business. However, as predicted in a federal cybersecurity warning issued last year, companies of all stripes can expect to see AI technology used to probe for gaps in their fraud and cyber defenses. Every day brings a new report of the havoc wreaked: fraudsters using deepfakes to impersonate corporate insiders, stealing tens of millions of dollars and gaining access to sensitive information; criminals using AI-generated fake identification to bypass know-your-customer protections; hackers using generative AI to craft strikingly realistic phishing emails. On the other side of the ledger, cybersecurity defense firms have begun to roll out AI-enhanced tools that allow for more sophisticated identification of fraud and malicious communications, vetting of incoming network traffic, and mining of publicly available data sources to detect emerging threats.

As we have discussed previously, while these applications of AI technology may be new, the challenge to corporate governance is not. AI fraud and cybersecurity issues present yet another venue for the exercise of directors’ ever-evolving duty to oversee the management of technology-driven risk. There are many ways for boards to adapt governance to meet this challenge, including, potentially, delegating AI oversight to existing risk, audit, or technology committees (with reports to the full board as appropriate), launching new dedicated subcommittees, or soliciting input from experts and external advisors. Regardless of the approach taken, it is essential that boards recognize the emerging threat and take thoughtful steps to oversee its management not just for today, but into the future.

William Savitt, Mark F. Veblen, and Noah B. Yavitz are Partners and Courtney D. Hauck is an Associate at Wachtell, Lipton, Rosen & Katz LLP. This post first appeared on the firm’s blog.

The views, opinions, and positions expressed within all posts are those of the author(s) alone and do not represent those of the Program on Corporate Compliance and Enforcement (PCCE) or of the New York University School of Law. PCCE makes no representations as to the accuracy, completeness, or validity of any statements made on this site and will not be liable for any errors, omissions, or representations. The copyright of this content belongs to the author(s), and any liability with regard to infringement of intellectual property rights remains with the author(s).