Generative AI-enhanced social engineering is upending every best practice MSPs have taught their customers about spotting and avoiding phishing.
There was considerable initial fear that generative AI would enable the automatic creation of malware, making an already saturated threat landscape even more approachable for newcomers. While there are certainly cases of generative AI being used to assist in writing malware, its output is still prone to error and largely requires a knowledgeable user to finish the work.
Where AI does excel, however, is in enabling certain types of threats, and bad actors are using it to advance their toolkits. Specifically, generative AI has made it trivial to craft a social engineering attack. Threat actors no longer need to worry about typos or grammatical errors caused by writing in a foreign language. Generative AI models are trained on publicly available information, including marketing material, which enables threat actors to ask these tools to craft phishing emails indistinguishable from genuine communications from the brand being imitated. Pixel-perfect, grammatically correct phishing emails were historically reserved for targeted attacks, but they are now commonplace, rendering much of the prior training ineffective.
The trend of attackers moving to an identity-first approach is evident in recent news headlines. MGM, Change Healthcare, and others were all breached through carefully crafted attacks that exploited lapses in human judgment. AI has made these attacks easier to craft, and as a result, MSPs need to adopt social engineering training that shifts the focus from spotting typos to recognizing tactics.
Fear- or Reward-Based Tactics
It’s no secret that threat actors exploit psychological vulnerabilities to trigger an adrenaline response. When an employee’s fight-or-flight response kicks in, they are less likely to think critically about the situation at hand. Social engineers use many forms of psychological manipulation, but the most common lures in broad-spectrum attacks are fear- or reward-based.
Fear-based social engineering tactics prey on every human’s desire to stay out of trouble. These emails, calls, or text messages often threaten monetary harm if a link is not clicked quickly. Attackers are not only leveraging fear against your customers’ employees but also creating a sense of urgency in hopes the victim will not stop to think critically about the message. Talk with customers about why their employees need training to spot the aggressive nature of this messaging.
Reward-based social engineering tactics do what the name suggests: they promise a reward for engaging with a link. Often the lure is a gift card offer, but information is also a common reward. Here, attackers are hoping greed takes over so that the victim’s critical thinking never engages to question the legitimacy of the message. Again, businesses need to train employees to spot these social engineering tactics.
Why Simulation Exercises Are Critical
Building a workforce resilient to social engineering requires multiple approaches. Regular reminders of the tactics in use are certainly a key component, but arguably more important is regular exposure to realistic social engineering attacks. Phishing and other social engineering simulations are critical to ensuring your clients and their employees are ready when the attack inevitably comes. These exercises need to be difficult and mimic real-life tactics.
Encourage your customers not to punish individuals who fail simulations. Allowing people to fail a phishing simulation in a safe, supportive environment encourages self-discovery and makes employees more vigilant in their day-to-day interactions with technology. Similarly, urge your customers to celebrate the individuals who report the simulation through the proper channels. Encouraging and rewarding early reporting lets a company’s best social engineering detectors protect everyone else through faster response times.