by Tom Melvin, Rich Kando, and Kevin Madura

Left to right: Tom Melvin, Rich Kando, and Kevin Madura (photos courtesy of AlixPartners LLP)
Today’s most concerning corporate romance is not on Coldplay’s kiss cam. Artificial intelligence (AI)-enabled document creation, synthetic IDs, face swapping, and impersonated voice overlays have made online scams more dangerous and more ubiquitous than ever. Scammers once used those tools primarily to defraud individuals, inflicting an estimated $75 billion in losses,[1] but they are now targeting corporate bank accounts and data repositories. Enter the corporate romance scam: a direct threat to two of a company’s most valuable assets, cash and data.
This article explores the ways technology has supercharged scamming, lists the basic steps of an online scam, and describes how companies are now being targeted in ways similar to individuals. It also explains how companies can mitigate the risk of falling victim to a scam and suffering catastrophic losses: by reworking old safeguards, by applying technology where it helps, and by going back to basics to defend against these new kinds of attacks.
The AI Boom for Online Scams
AI has supercharged romance scams and provided the criminals behind them with an array of new opportunities to run old and new fraud schemes with greater precision, at lightning speed, and at a fraction of the cost. We are entering the era of high-frequency fraud.
This past July, OpenAI CEO Sam Altman explained to an audience at a Federal Reserve conference that “AI has fully defeated most of the ways that people authenticate,” and he warned of a “significant, impending fraud crisis.”[2] Similarly, the United Nations Office on Drugs and Crime, which monitors underground marketplaces, has reported that mentions of deepfakes and other AI-generated content used in fraud schemes and to bypass know-your-customer measures increased 600% from February to April 2024.[3]
AI can easily be used to generate fake documents and profiles, but that is only the beginning. Deepfake video and voice cloning can be used to create a fake person or to have a victim correspond with a simulated version of a real person the victim knows and trusts. AI systems can autonomously create profiles, posts, and videos for the scammer and can scan the same social media sites to perform social engineering on potential victims. AI chatbots can even conduct the outreach, so a real person might not need to get involved until the target has been well primed.
The flourishing “Crime-as-a-Service” economy enables these innovative uses of AI for fraud to spread quickly among experienced and aspiring scammers alike. Criminals can browse endless pages on online marketplaces and the dark web, where the AI code they need is for sale alongside other necessities like prewritten scripts and stolen bank information. Scammers can even sort through postings from experienced AI developers offering their services to assist with the fraud. These networks are nimble, well organized, and borderless.
Anatomy of a Romance Scam: Targeting an Individual
Online scams for defrauding individuals typically involve the following four steps.
Step One: Initial Contact
A scam often begins with a cryptic text from an unknown number that says something like, “I won’t be able to make the meeting tomorrow” or “Do we have everything in order for the next event?” The target responds that the sender has the wrong number, which is followed by the scammer’s polite apology. That response opens the chain to further messaging: “So sorry. My assistant put in the wrong number. You seem like a kind person” or “My mistake. Thank you for understanding. What are you up to today?”
Many scammers take a fishing-net approach, simply sending as many of these messages as possible and waiting for a response to pursue. Others work as hunters, using social engineering or know-your-victim online research to gather key data points about the victim: profession, location, hobbies.
The messages are casual and nonthreatening. The person on the other end has an answer for everything. The person explains that they have a local number but are stationed in another country for a new job. They would prefer to switch to an encrypted app because they should not be talking about personal topics on a work device. The person messaging does not ask for anything—just someone to talk to.
Step Two: Gaining Trust
If the target is responsive, then the conversation continues for weeks or months as the scammer builds trust.
The exchanges typically begin as friendly interactions that shift to romantic overtures. The back-and-forth appears organic, even vulnerable, but it is meticulously planned and rehearsed to be fit for purpose. Scammers frequent online markets dedicated to selling the techniques and tools for these frauds: scripts, photos, videos, even services that will send gifts of flowers and chocolates to the targets.
A target could also be lucky enough to video chat with this online paramour. In this case, the target may be chatting with an entirely fake person generated and powered by AI.
Step Three: The Ask
Once the scammer has established trust, the scammer deftly pivots the conversation.
In early manifestations of the scams, the requests were straightforward and relied on emotions like empathy and love. The scammer would request cash for an unpaid medical expense, funds for a delinquent tax issue, or money for a travel visa or airplane ticket for a promised in-person visit. Payment methods would always prioritize anonymity, with the scammer asking for items like gift cards or money orders, but soon cryptocurrency transfers took over the space.
In recent years, however, scammers have not been satisfied with one-off thefts based solely on personal trust or romantic affinity. The newer scams exploit greed, capitalizing on the target’s desire to grow rich.
First, the scammer explains how they or a close friend acquired incredible wealth through online trading and investing, often sending a screen grab of an account showing large profits over a short period. Next, the target might be directed to open an account at a well-established cryptocurrency exchange to convert their cash to cryptocurrency. Last, that cryptocurrency is transferred to what appears to be a brokerage account or crypto investment platform.
In the scam, the victim will begin with small amounts, and the scammer will show the victim huge returns, prompting the victim to send further funds to avoid missing out on the lucrative opportunity. In reality, there are no returns, the account is entirely fake, and the transfers are going to the scammer’s wallet.
Step Four: The Kill
The final step is the kill, wherein the target realizes something is wrong, and the scammer stops communicating.
The unraveling can start when the victim requests a payout or wants to make a withdrawal. In an effort to bleed as much money as possible from the victim before the tap gets shut off, the scammer might begin throwing up roadblocks, each with a new cost for removal: a 10% withdrawal fee, $10,000 in foreign taxes, or a deposit into a new wallet prior to transfer.
This phase might also introduce a new correspondent, or killer, who is an expert at draining the victim through intimidation or threats. In this final chapter, the primary emotion being exploited is fear. The victim is scared: scared they will never see their life savings again and scared of the inevitable embarrassment that will follow once everyone knows they fell for this ridiculous scam. In the end, the scammer and the money are gone.
Corporations as the New Targets
Scammers look for access to the deepest pockets, and companies are the obvious marks. Businesses rely on employees and contractors for critical, sensitive services; many employees work remotely; and some employees are paid modestly. Nearly all employees have some social media activity, and many have an extensive online presence that creates a road map for scammers to research their targets’ employers and potential access. Many employees bring their own device to work or operate on home networks, increasing the surface area for communication and compromise.
Depending on the scammer’s methods, employees may be willing, unwilling, or unwitting accomplices to fraud schemes. The following scenarios are examples of each potential situation and how the methods used for scams on individuals can be readily repurposed to defraud companies.
A. Bad employee from the beginning: Job applicants intending to defraud the company
In some instances, employees may be working directly with the scammers from the start. Just as fake profiles and personalities pervade the romance scam world, here they can be used to create ideal job candidates. AI can assist by generating phony documents and by supplying deepfakes to conduct the interview over Zoom.
For example, North Korea’s sanctions-evasion tactics have taken on an entirely different bent in today’s hybrid and remote working environments. What started a few years ago has become a multimillion-dollar scheme in which North Korean workers secure high-paying remote roles at some of America’s largest companies, especially among IT and cybersecurity vendors, often stealing identities to do so.[4]
B. Good employee breaks bad: Trusted employee begins to work for or with criminal enterprise to defraud company
In other situations, the scammer recruits or coerces an employee to be part of the fraud. The same levers that get a victim to part with their money can be pulled to make an employee give up critical company information.
For example, an American crypto exchange with millions of users relies on overseas customer support agents. While ample controls exist to protect customer data, one of the contractors appears to have been bought off by a local criminal enterprise and took pictures of customer data in an attempt to extort the exchange.[5] If the contractor had used smart glasses with embedded cameras that allow for online streaming, the data theft could have been even worse.
The employee may be in dire financial circumstances that drive them to conspire with the scammer for needed extra cash; alternatively, the scammer may be extorting the employee with embarrassing information or with personal funds locked up through a separate fraud scheme. No matter the technique, the company and its customers are put at serious risk.
C. Good employee gets tricked: Trusted employee gets tricked by impersonation scam and permits a third party to defraud company
An employee could simply fall victim to an impersonation scheme, made more dangerous through the use of AI deepfake technology. Romance scams consistently rely on deepfakes to mimic adoring love interests and trusted investment advisors, but in this case, AI allows for video calls with the CEO, chief financial officer (CFO), or chief legal officer. AI can also generate the fraudulent documentation and help locate a target via social engineering that screens for job title and experience level.[6]
For example, scammers stole $25 million from the Hong Kong office of British engineering firm Arup, a fraud publicly linked to the firm in May 2024. After being instructed to complete a “secret” transaction by someone claiming to be the CFO, the employee participated in a video call with deepfakes of the CFO and several other senior managers.[7]
Companies Must Protect Themselves
These are not traditional phishing attempts or other cyber intrusions used to gain access to a company’s systems. These attacks recruit the individual and then use that person’s access against the company. The scams are therefore less susceptible to cybersecurity protections and harder to detect because it appears that an authorized employee is conducting authorized activity.
What’s more, companies may often be on their own in fending off these attacks. Federal authorities are typically best equipped to investigate these high-tech, lightning-speed, cross-border crimes, but law enforcement priorities are shifting, and government efficiency initiatives are driving cuts. The US Department of Justice’s (DOJ’s) proposed 2026 budget shows a $645.8-million drop in funding for law enforcement and national security capabilities, with specific reductions within the FBI’s Cyber and International Operations Divisions.[8] Likewise, the Federal Trade Commission’s (FTC’s) 2026 budget shows a decrease of $42.1 million, $18 million of which will come from consumer protection programs.[9] The recently passed One Big Beautiful Bill reduced the amount of money available to operate the Consumer Financial Protection Bureau by almost half.[10]
At the same time that enforcement may be waning, the frequency of these scams and the ensuing financial losses have been growing. The FTC stated that consumers reported losing $12.5 billion to fraud in 2024, which was a 25% increase from the prior year.[11] Similarly, the FBI estimated losses of $16 billion from internet crime in 2024, a 33% increase from 2023.[12]
Questions Companies Should Ask About Their Safeguards
These trends indicate that prevention and timely detection should be priorities. After conducting a risk assessment, a company may need to retool risk management areas such as background screening, data monitoring, employee training, and whistleblower hotlines.
As a first step, a company should determine—through a formal risk assessment—where and how these risks can materialize. Because employees have disparate levels of access, hiring protocols and postemployment controls cannot be one-size-fits-all. As part of the risk assessment, a company must dive deep into people risk by asking questions like, What data do employees have access to, and can they initiate payments?
Based on the results of the risk assessment, safeguard enhancements may relate to:
- Employee training: Does employee training include guidance on how to spot common scams or the use of deepfakes (such as asking a remote interviewee to place his hand in front of his face to interrupt the deep-fake technology)? Have targeted modules been deployed for employees with access to critical data and who can initiate financial transactions? Has the company been explicit with its employees that nonretaliation assurances extend to reports of being tricked or extorted by a scammer?
- Hiring red flags: How can the background-check process be enhanced beyond just verifying the information provided by the applicant? Do hiring teams check nontraditional red flags like brand-new email addresses and social media accounts (see the first sketch following this list)? Should the failure to find certain information otherwise expected in context (e.g., a younger professional with no social media presence) trigger additional review? For some roles, should deeper dives or more regular background checks be considered?
- Layering payment controls: On special projects or for nonrecurring payments, are there separate protocols to ensure the payments are authorized, such as the use of corporate email for communication or required project passwords?
- Additional data monitoring: Do companies leverage existing data to monitor for patterns and anomalies consistent with known scam behaviors (see the second sketch following this list)? Do internal teams have the tools they need to understand the flow of data within their organization?
- Remote working controls: Have companies reexamined controls for remote workers? Have they done the same for their offshore business partners? Have cybersecurity controls been fortified with more-traditional roadblocks to fraud like worksite requirements where it makes sense?
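As an illustration of the hiring red flag noted above, the first sketch below checks whether an applicant’s email domain was registered only recently. It is a minimal example, not a prescribed control: the python-whois dependency, the function names, and the 365-day cutoff are assumptions chosen for illustration, and WHOIS lookups will not resolve for every domain (or reveal anything useful for free webmail providers), so misses should route to manual review rather than automatic rejection.

```python
# Minimal sketch: flag applicant email domains that were registered only recently.
# Assumes the third-party "python-whois" package (pip install python-whois).
from datetime import datetime, timezone

import whois


def domain_age_days(email: str) -> int | None:
    """Return the age of the email's domain in days, or None if it cannot be determined."""
    domain = email.rsplit("@", 1)[-1].lower()
    try:
        record = whois.whois(domain)
    except Exception:
        return None  # lookup failed; escalate for manual review
    created = record.creation_date
    if isinstance(created, list):  # some registrars return multiple dates
        created = min(created)
    if created is None:
        return None
    if created.tzinfo is None:
        created = created.replace(tzinfo=timezone.utc)
    return (datetime.now(timezone.utc) - created).days


def screen_applicant_email(email: str, minimum_age_days: int = 365) -> str:
    """Illustrative screening rule: flag addresses on very young domains."""
    age = domain_age_days(email)
    if age is None:
        return "REVIEW: could not verify domain age"
    if age < minimum_age_days:
        return f"FLAG: domain is only {age} days old"
    return "OK"


if __name__ == "__main__":
    # Hypothetical applicant address used only for illustration.
    print(screen_applicant_email("candidate@example.com"))
```

A check like this would feed the hiring team’s review queue rather than drive an automated decision, consistent with the layered approach described above.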
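The data-monitoring question can likewise be made concrete. The second sketch below is a purely illustrative, rule-based comparison of a newly initiated payment against the initiating employee’s own history; the event fields, the first-time-payee rule, and the three-standard-deviation threshold are assumptions made for the example, and a real program would tune its rules to the company’s payment data and route hits to a second approver rather than block them automatically.

```python
# Minimal sketch of rule-based payment monitoring over an in-house payment log.
# Field names and thresholds are illustrative assumptions, not a standard.
from dataclasses import dataclass
from statistics import mean, pstdev


@dataclass
class Payment:
    employee: str  # identifier of the employee who initiated the payment
    payee: str     # beneficiary account or vendor identifier
    amount: float  # amount in a single reporting currency


def flag_anomalies(history: list[Payment], new: Payment) -> list[str]:
    """Return reasons the new payment deviates from the employee's own baseline."""
    reasons: list[str] = []
    own_history = [p for p in history if p.employee == new.employee]

    # Rule 1: first payment this employee has ever sent to this payee.
    if new.payee not in {p.payee for p in own_history}:
        reasons.append("first-time payee for this employee")

    # Rule 2: amount far above the employee's historical pattern (z-score > 3),
    # evaluated only once there is enough history to form a baseline.
    amounts = [p.amount for p in own_history]
    if len(amounts) >= 5 and pstdev(amounts) > 0:
        z = (new.amount - mean(amounts)) / pstdev(amounts)
        if z > 3:
            reasons.append(f"amount is {z:.1f} standard deviations above baseline")

    return reasons


if __name__ == "__main__":
    # Illustrative data: routine vendor payments followed by an unusual transfer.
    history = [Payment("a.chen", "vendor-001", amt) for amt in (980, 1010, 995, 1020, 1005)]
    suspicious = Payment("a.chen", "wallet-xyz", 250000)
    print(flag_anomalies(history, suspicious))  # both rules fire
```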
Conclusion
Just as individuals can face financial ruin from these scams, corporate victims likewise risk devastating losses of money, data, and reputation. The time-tested methods of these online scams are not evolving just to better attack companies; they are being augmented by AI and then optimized for corporate vulnerabilities. And as the scams become more common and more dangerous, companies must become proactive in fighting back and staying several steps ahead of the scammers.
Footnotes
[1] The University of Texas at Austin – McCombs School of Business, Romance Fraud (October 3, 2024) available at https://news.mccombs.utexas.edu/faculty-news/romance-fraud/.
[2] Federal Reserve, Fireside Chat with Vice Chair for Supervision Bowman (July 22, 2025) available at https://www.youtube.com/watch?v=sIEVV1mH9iY.
[3] United Nations Office on Drugs and Crime, Transnational Organized Crime and the Convergence of Cyber-Enabled Fraud, Underground Banking and Technological Innovation in Southeast Asia: A Shifting Threat Landscape (October 2024) available at https://www.unodc.org/roseap/uploads/documents/Publications/2024/TOC_Convergence_Report_2024.pdf.
[4] US Department of Justice press release: Justice Department Announces Coordinated, Nationwide Actions to Combat North Korean Remote Information Technology Workers’ Illicit Revenue Generation Schemes (June 30, 2025) available at https://www.justice.gov/opa/pr/justice-department-announces-coordinated-nationwide-actions-combat-north-korean-remote; Federal Bureau of Investigation, Public Service Announcement I-012325-PSA: North Korean IT Workers Conducting Data Extortion (January 23, 2025) available at https://www.ic3.gov/PSA/2025/PSA250123.
[5] Coinbase Blog, Protecting Our Customers – Standing Up to Extortionists (May 15, 2025) available at https://www.coinbase.com/blog/protecting-our-customers-standing-up-to-extortionists.
[6] Federal Bureau of Investigation, Public Service Announcement I-081325-PSA: Fictitious Law Firms Targeting Cryptocurrency Scam Victims Combine Multiple Exploitation Tactics While Offering to Recover Funds (August 13, 2025) available at https://www.ic3.gov/PSA/2025/PSA250813.
[7] Inc., How a $25 Million Deepfake Scam Reveals the Dramatic Stakes of Cybercrime in the Era of AI (July 1, 2025) available at https://www.inc.com/shumanghosemajumder/how-a-25-million-deepfake-scam-reveals-the-dramatic-stakes-of-cybercrime-in-the-era-of-ai/91186686.
[8] U.S. Department of Justice, Fiscal Year 2026 Budget and Performance Summary (June 13, 2025) at 134 available at https://www.justice.gov/media/1403736/dl.
[9] Federal Trade Commission, Congressional Budget Justification Fiscal Year 2026 (May 30, 2025) at 8 available at https://www.ftc.gov/system/files/ftc_gov/pdf/fy-2026-cbj.pdf.
[10] H. Res. 1, 119th Congress, 1st Session, Title III, Sec. 30001 available at https://www.congress.gov/bill/119th-congress/house-bill/1/text. (“Section 1017(a)(2)(A)(iii) of the Consumer Financial Protection Act of 2010 (12 U.S.C. 5497(a)(2)(A)(iii)) is amended by striking “12” and inserting “6.5”.”)
[11] Federal Trade Commission, New FTC Data Show a Big Jump in Reported Losses to Fraud to $12.5 Billion in 2024 (March 10, 2025) available at https://www.ftc.gov/news-events/news/press-releases/2025/03/new-ftc-data-show-big-jump-reported-losses-fraud-125-billion-2024.
[12] Federal Bureau of Investigation, Internet Crime Report 2024 at 4 available at https://www.ic3.gov/AnnualReport/Reports/2024_IC3Report.pdf.
Tom Melvin is a Partner, Rich Kando is a Partner and Managing Director, and Kevin Madura is a Director at AlixPartners LLP.
The views, opinions, and positions expressed within all posts are those of the author(s) alone and do not represent those of the Program on Corporate Compliance and Enforcement (PCCE) or of the New York University School of Law. PCCE makes no representations as to the accuracy, completeness, and validity of any statements made on this site and will not be liable for any errors, omissions, or representations. The copyright of this content belongs to the author(s), and any liability with regard to infringement of intellectual property rights remains with the author(s).
