Real-Time Deepfakes May Necessitate Enhancements to Wire Transfer BEC Policies

by Charu Chandrasekhar, Luke Dembosky, Avi Gesser, Erez Liebermann, Matt Kelly and Karen Joo  

Left to right: Charu Chandrasekhar, Luke Dembosky, Avi Gesser, Erez Liebermann, Matt Kelly and Karen Joo. (Photos courtesy of Debevoise & Plimpton LLP)

The following scenario is no longer science fiction: An employee receives an email from the CEO asking her to join a video call. The CEO directs the employee to send confidential documents to a third party. The request is unusual, but the employee saw the CEO with her own eyes, so she complies. It turns out, however, that it was a real-time deepfake and not the real CEO who gave the instructions on the video call.

We’ve previously written about business email compromise (“BEC”) and wire transfer fraud scams, and the various measures that companies can implement to reduce the associated risks. But in light of recent developments in deepfake technologies, and their increasing use as part of BECs, companies should consider revisiting their BEC mitigation strategies because some existing BEC policies may no longer be sufficient to address these emerging threats.

Deepfake technology has reached the stage where attackers are now starting to generate convincing fake audio and video in real time. If an employee genuinely (although mistakenly) believes that they are being instructed by the CEO to take a specific action (such as wire a large sum of money out of the company), they are likely to do it, even if that action is contrary to company policy. Therefore, addressing the risks of deepfake-enhanced BECs is mainly a training issue, not a policy issue, and companies should consider providing specific training on deepfake risks to employees who are responsible for transferring large sums of money or issuing new passwords for confidential accounts.

A BEC scam often involves a threat actor using a fake or hacked email account to impersonate a company executive or a vendor requesting a payment, thereby tricking a company employee into sending money to an account that the threat actor controls. The targeted employee often works in the company's finance or accounting department (or the IT helpdesk, where the goal is initial access, such as through a password reset to a new device).

Traditionally, BEC risks are mitigated by policies that require independent verification. For example, employees who receive instructions to make a payment over a certain threshold (e.g., $10,000) to a new bank account may be required by policy to confirm that the instruction is legitimate by calling the person making the request (or a previously identified officer of the requesting company) using a phone number that can be independently verified as authentic (i.e., not the phone number provided in the payment request). Some SEC-registered broker-dealers and investment advisers include such measures as part of their Regulation S-ID policies.
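The callback rule described above can be sketched as a simple policy check. This is a minimal illustration, not an implementation: the function and directory names, and the $10,000 threshold, are assumptions drawn from the example in the text, and the key design point is that the callback number must come from a record kept independently of the payment request itself.

```python
from typing import Optional

# Illustrative sketch of the callback-verification rule described above.
# The threshold and all names are assumptions taken from the example policy.
VERIFICATION_THRESHOLD_USD = 10_000

# Directory of independently verified phone numbers (e.g., from vendor
# onboarding records) -- never taken from the payment request itself.
VERIFIED_DIRECTORY = {
    "acme-vendor": "+1-555-0100",
}

def callback_number(requester_id: str,
                    amount_usd: float,
                    is_new_account: bool) -> Optional[str]:
    """Return the independently verified number to call before paying,
    or None if the policy does not require a callback for this request."""
    if amount_usd > VERIFICATION_THRESHOLD_USD and is_new_account:
        number = VERIFIED_DIRECTORY.get(requester_id)
        if number is None:
            # No trusted contact on file: the payment should be held,
            # not verified against a number supplied by the requester.
            raise ValueError("No independently verified contact on file; "
                             "hold the payment until one is established.")
        return number
    return None
```

Note that the check deliberately fails closed: if no independently verified contact exists, the request is held rather than verified against information the requester supplied.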

But these policies are sometimes not effective in stopping deepfake-enhanced BECs. Suppose an employee who is responsible for making wire payments receives a text message from the CFO asking the employee to join a video call, and sending a link for the call. The person on the call looks and sounds exactly like the CFO, and tells the employee that there is a fast-moving and highly confidential company transaction that is about to get signed. Because of insider-trading risks, the employee is told to keep the transaction strictly confidential. To facilitate the transaction, the employee is instructed to immediately wire $3 million to the account of an investment bank that is allegedly involved in the transaction, but is not a usual company vendor. If employees believe that they are actually talking with the CFO, then they are unlikely under those circumstances to follow a company policy that requires them to call the CFO at a verified office number to confirm that the request is legitimate before executing it.

To address this kind of deepfake risk, companies should consider implementing the following additional measures:

Training: Employees who transfer large sums of money, send out confidential information, or handle IT helpdesk access requests should be made aware that deepfake technology can now create very convincing fake audio and video in real time. Therefore, any request made by audio or video could be fraudulent, especially if it has one of the following hallmarks: (a) it is unusual, (b) it involves the transfer of large sums of money or highly sensitive information, (c) it includes a requirement to keep the request confidential or not to follow normal protocols, (d) it has an element of urgency, or (e) it involves a transfer of funds to a new bank account or of confidential information to an unfamiliar email address. Training should specifically note that employees will not face any adverse action for following company verification protocols when presented with such a request, and that failing to follow verification protocols, even at the request of the CEO, could result in discipline. Training could also include ways to detect real-time video deepfakes, like asking the person to turn to the side, because the technology is not very good at generating realistic profile views.
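The hallmarks (a) through (e) above amount to a screening checklist, which can be sketched in code. This is a hypothetical illustration only; the record fields and threshold are assumptions, and in the policy sketched here any single hallmark is enough to trigger enhanced verification.

```python
from dataclasses import dataclass

# Hypothetical request record; field names are illustrative assumptions
# mapping to hallmarks (a)-(e) in the text.
@dataclass
class Request:
    amount_usd: float
    is_unusual: bool               # (a) out of the ordinary
    involves_sensitive_info: bool  # (b) highly sensitive information
    demands_secrecy: bool          # (c) confidentiality / bypass of protocol
    is_urgent: bool                # (d) pressure to act immediately
    new_destination: bool          # (e) new bank account or unfamiliar address

LARGE_SUM_THRESHOLD_USD = 10_000  # example threshold; set by policy

def hallmark_flags(req: Request) -> list:
    """Return the list of BEC hallmarks present in a request."""
    flags = []
    if req.is_unusual:
        flags.append("unusual request")
    if req.amount_usd >= LARGE_SUM_THRESHOLD_USD or req.involves_sensitive_info:
        flags.append("large sum or sensitive information")
    if req.demands_secrecy:
        flags.append("confidentiality / bypass of normal protocols")
    if req.is_urgent:
        flags.append("urgency")
    if req.new_destination:
        flags.append("new account or unfamiliar address")
    return flags

def requires_enhanced_verification(req: Request) -> bool:
    # Any single hallmark triggers the enhanced verification measures.
    return len(hallmark_flags(req)) > 0
```

Applied to the CFO scenario above (a $3 million urgent, confidential wire to an unfamiliar account), such a checklist would flag every hallmark at once.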

Enhanced Verification: Any request with hallmarks identified above should be independently verified by one or more of the following means:

  • In-person verification or calling the person making the request at a phone number that can be independently verified as authentic.
  • Requiring the requester to display two pieces of identification, at least one with a photo, during the live video call.
  • Requiring the requester to provide a predetermined code word or response to a challenge question that the company has implemented for authentication of such requests.

Two-Person Approval: A second person should be required to sign off on certain categories of high-risk requests made by voicemail, email, text, audio or video.
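The two-person rule above can be expressed as a small gating check. This is a sketch under stated assumptions: the channel names and the approver count of two are taken from the text, and distinct approvers are modeled as a set so the same person cannot count twice.

```python
# Minimal sketch of a two-person approval gate for high-risk requests.
# Channel names and the approver count are taken from the policy above.
HIGH_RISK_CHANNELS = {"voicemail", "email", "text", "audio", "video"}
REQUIRED_APPROVERS = 2

def can_execute(channel: str, approvers: set) -> bool:
    """A high-risk request proceeds only with two distinct approvers;
    a set ensures the same person cannot approve twice."""
    if channel in HIGH_RISK_CHANNELS:
        return len(approvers) >= REQUIRED_APPROVERS
    return True
```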

If a business believes that it is likely the victim of a BEC, it should immediately (1) contact the banks that sent and received the money to alert them about the fraud, and (2) submit the relevant information to https://bec.ic3.gov/ in the U.S. to trigger FBI efforts to start the financial fraud kill chain (a process that the FBI has successfully used many times to freeze and recover fraudulently transferred funds).

Charu Chandrasekhar, Luke Dembosky, Avi Gesser, and Erez Liebermann are Partners, Matt Kelly is a Counsel, and Karen Joo is a Law Clerk at Debevoise & Plimpton LLP. This post originally appeared on the firm's blog.

The views, opinions and positions expressed within all posts are those of the author(s) alone and do not represent those of the Program on Corporate Compliance and Enforcement (PCCE) or of the New York University School of Law. PCCE makes no representations as to the accuracy, completeness and validity of any statements made on this site and will not be liable for any errors, omissions or representations. The copyright of this content belongs to the author(s) and any liability with regard to infringement of intellectual property rights remains with the author(s).