by Ed Stroz and Carl Young
Recently, a malware payload (referred to as “LightlessCan”) was successfully deployed in connection with fake job offers.[1] According to researchers at ESET, the North Korean-affiliated hacking group “Lazarus” was behind this targeted phishing operation, which tricked employees at a Spanish aerospace company with fake offers of employment at well-known firms.
Of course, there is nothing new about bad actors of all types tricking unsuspecting users into downloading malware. Phishing and pretexting, two forms of social engineering, constituted approximately 20 percent of all cyberattacks in 2022.[2] The difference here is twofold: the sophistication of the software in eluding detection and the apparent authenticity of the ruse.
Such an attack is certainly cause for concern, but it should not be much of a surprise given the historical popularity of social engineering as a mode of attack. Moreover, the trend shows no signs of abating. As recently as September 2023, Clorox, MGM Resorts International, and Caesars Entertainment all announced they were victims of cyberattacks, most likely resulting from successful social engineering campaigns.[3] Clorox experienced a 21-26 percent reduction in sales and a 25 percent drop in its share price as a direct result of this incident.
In fact, the evolutionary arc of malware sophistication has been trending upward for years.[4] However, not only is the software becoming more advanced, the scope of attacks has evolved beyond the use of email. For example, adversaries now regularly incorporate mobile and personal communication channels (e.g., texting and voicemail) into phishing campaigns, which lends credibility to their approach.
In addition, the authenticity of all forms of communication is becoming more difficult to confirm, making it nearly impossible to distinguish between friend and foe. The upshot is a significant increase in susceptibility to social engineering. Until very recently, recognizing the voice or writing style of a message sender was a relatively reliable method of confirming identity. Yet in just the past year or two we have witnessed a remarkable improvement in the spelling, grammar, and overall writing style of garden-variety email phishing attacks. We doubt attackers have enhanced their native writing skills in such a short time frame.
Our suspicion is that the proliferation of generative artificial intelligence (AI) tools is behind this rapid evolution. Recall these methods utilize statistical models to replicate human actions such as written and verbal communication. Although large data sets are typically required to train such models, researchers at McAfee recently discovered scammers needed only three seconds of recorded audio to replicate a person’s voice using AI.[5]
Given the trajectory of information technology and the potential profits at stake, improvements in AI will only continue to accelerate. Although this technology will clearly benefit society in some areas, its widespread availability bodes ill for law-abiding users of the Internet. The key question, therefore, is how average users can defend themselves against increasingly sophisticated adversaries.
In our view the answer to this question hasn’t changed with the advent of new technology. The need for vigilance and proper authentication is still of paramount importance. Specifically, any unsolicited attempt to engage in online activity over any communication channel requires verification. In other words, if you receive a request to perform an operation online that you did not initiate, no matter how realistic the request might seem, do not give in to temptation. Verify the authenticity of the message by directly contacting the purported sender before acting.
Finally, recognize that both victims and attackers in cyberspace are human, and exploiting human behavior is not a new phenomenon. Although the tools of exploitation have evolved over the millennia, the underlying modus operandi has not changed because humans are trusting by nature. In contexts where identity was easily confirmed this feature might have given our species a competitive advantage. However, in modern times it can have disastrous consequences particularly when communicating via the Internet. Social engineering exploits are rooted in the vulnerability that naturally accompanies a potentially toxic combination of unqualified trust and impulse. Although the antidote to exploitation isn’t complicated or technically difficult, it’s clear that human behavior is, and perhaps always will be, the weakest link in cybersecurity risk management.
Footnotes
[1] B. Lindrea, “Crypto firms beware: Lazarus’ new malware can now bypass detection,” Cointelegraph, October 2, 2023.
[2] 2023 Verizon Data Breach Investigations Report.
[3] Fast Company (Apple News), “How old-fashioned hacking may have taken Clorox off store shelves for months,” Oct 13, 2023.
[4] https://www.cnbc.com/2023/01/07/phishing-attacks-are-increasing-and-getting-more-sophisticated.html
[5] https://www.newstribune.com/news/2023/jul/02/bbb-tips-scammers-using-ai-to-generate-voices/
Edward Stroz and Carl S. Young are co-founders of Consilience 360, LLC, a security consulting firm that specializes in advising boards of directors, corporate committees and corporate officers on cybersecurity risk management and governance.
The views, opinions and positions expressed within all posts are those of the author(s) alone and do not represent those of the Program on Corporate Compliance and Enforcement (PCCE) or of the New York University School of Law. PCCE makes no representations as to the accuracy, completeness and validity of any statements made on this site and will not be liable for any errors, omissions or representations. The copyright of this content belongs to the author(s), and any liability with regard to infringement of intellectual property rights remains with the author(s).