AI’s Double-Edged Sword: How MSPs Can Help Clients Harness Innovation While Battling AI-Driven Threats

This article was written by Chris Henderson, chief information security officer at Huntress.

Generative AI has rapidly moved from buzzword to business reality, promising transformative benefits from faster threat detection to smarter automation. But alongside these advancements, attackers are weaponizing the same tools to launch more convincing scams, deepfakes, and phishing campaigns.

To make sense of this double-edged reality, MSPs must educate their clients on ways that AI can bolster security strategies while also teaching them to spot common AI-enabled threats.

Common AI-enabled Attacks and What to Look For

Threat actors use generative AI in a variety of ways to create more convincing attacks and deceive their targets. These can range from AI-generated robocalls to “pig butchering scams”—online fraud that combines elements of social engineering, romance scams, and investment fraud. (The term comes from the idea of “fattening up” the victim before “butchering” them.) 

Here are three common ways that we see AI used by threat actors in the wild and how you can help your clients protect against them: 

1. Deepfakes

AI-generated deepfakes have become scarily accurate, and threat actors are using that to their advantage in social engineering attacks against businesses.

For example, Huntress recently observed an incident where an employee at a cryptocurrency foundation joined a Zoom meeting with several deepfakes of known senior leadership within their company. The deepfakes told the employee to download a Zoom extension to access the Zoom microphone, paving the way for a North Korean intrusion. (For more on this incident, see Deepfake Video Calls Pose New Cyberthreat: What MSPs Must Do to Protect Clients.)

For businesses, these types of scams are turning existing verification processes upside down. It’s important for MSPs to help their clients understand the red flags to look for with deepfakes, such as facial inconsistencies, long silences, or strange lighting. MSPs should also help their customers develop clear, actionable plans for employees who find themselves face-to-face with a deepfake scam.

2. Phishing Emails

Attackers are using AI to craft more convincing phishing emails, which is especially useful for threat actors who don’t speak English as a first language. AI-polished text also sidesteps many of the phishing indicators that employees have been trained to look for, like spelling errors or questionable grammar. Attackers are also integrating AI tools into their phishing kits to translate landing pages and emails into other languages, which helps them scale their phishing campaigns.

It’s important to recognize, however, that many of the same security measures still apply to AI-generated phishing content. For instance, MSPs should encourage clients to enable multifactor authentication (MFA) to mitigate the impact of phishing attacks. MSPs should also continue to advise their clients to provide security awareness training for employees and to be wary of messages that express urgency. 

3. Fake AI Tools

In true threat actor fashion, attackers are riding the popularity of AI to trick people into downloading malware. We frequently see threat actors tailoring their lures and customizing their attacks around popular current events or seasonal fads like Black Friday, so malicious “AI video generator” websites and fake, malware-laden AI tools come as no surprise. In one instance, a TikTok account was reportedly posting videos showing how to install “cracked software” that bypasses licensing or activation requirements for apps like ChatGPT via a PowerShell command. In reality, the account was running a malware distribution campaign, which researchers later exposed.

Security awareness training is key for businesses here, too, and MSPs should advise clients to block employees from downloading tools that haven’t been vetted.

Adding AI to the Defenders’ Toolbox 

AI isn’t yet a game changer for threat actors by any means. As the examples above show, many of the same defenses that businesses currently use can help protect against malicious AI use cases. For defenders, however, AI-enabled tools come with important threat detection, analysis, and context enrichment benefits that can help speed up workflows.

For example, AI can help analysts with context enrichment during detection and response. When analysts look at a given signal or piece of telemetry, they currently use various tools to gather the contextual information they need to determine whether a threat is malicious. AI can speed up this process by collecting and presenting that context automatically, ultimately lowering the mean time to decision for analysts reviewing signals.
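To make the enrichment idea concrete, here is a minimal Python sketch. All of the data, names, and the summarization stub are hypothetical illustrations, not a description of any specific product: the enrichment step pulls in context automatically, and the AI-style step only drafts a note for the analyst rather than rendering a verdict.

```python
# A minimal sketch of AI-assisted context enrichment. All names, data,
# and lookups here are hypothetical illustrations, not real tooling.

ASSET_INVENTORY = {
    "WS-042": {"owner": "finance", "critical": True},
}
KNOWN_BAD_HASHES = {"deadbeef"}  # placeholder threat-intel entries


def enrich_signal(signal: dict) -> dict:
    """Attach the context an analyst would otherwise gather by hand."""
    enriched = dict(signal)
    enriched["asset"] = ASSET_INVENTORY.get(signal["host"], {})
    enriched["known_bad"] = signal["sha256"] in KNOWN_BAD_HASHES
    return enriched


def summarize_for_analyst(enriched: dict) -> str:
    """Stand-in for the AI step: condense enriched context into a note.

    A production pipeline might call an LLM here; the model drafts the
    summary, but the malicious-or-not decision stays with the analyst.
    """
    flags = []
    if enriched["known_bad"]:
        flags.append("hash matches threat intel")
    if enriched.get("asset", {}).get("critical"):
        flags.append("runs on a critical asset")
    return "; ".join(flags) or "no immediate risk indicators"


signal = {"host": "WS-042", "sha256": "abc123"}
print(summarize_for_analyst(enrich_signal(signal)))  # runs on a critical asset
```

The point of the sketch is the division of labor: the automation assembles context in one pass, shrinking the time an analyst spends pivoting between tools before making a call.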

Businesses grappling with governance, risk, and compliance can also use AI in several ways. For instance, AI can be useful for businesses looking to measure the efficacy of their security programs’ controls. AI tools, with human oversight, can also help security teams work through vendor due diligence questionnaires during risk evaluations for prospective clients.

That being said, human oversight of AI tools is still very much needed. AI-enabled tools are not yet reliable at binary decision-making (specifically, answering questions like “is this malicious or not?”), and they’re still prone to challenges like prompt injection (attackers inject malicious or manipulative text into the prompt to alter the AI’s behavior) and hallucinations (the AI model produces information that is false, illogical, or completely made up).
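One common way to keep that human oversight in the loop is gating logic around AI verdicts. The sketch below (thresholds and labels are hypothetical) shows the idea: low-confidence or high-impact calls are always routed to an analyst, and only high-confidence benign verdicts are eligible for automatic closure.

```python
# Minimal sketch (hypothetical thresholds and labels): route AI verdicts
# so that uncertain or high-impact calls always get human review.

def route_verdict(ai_label: str, confidence: float, critical_asset: bool) -> str:
    """Return who decides: the automation or a human analyst."""
    if critical_asset or confidence < 0.9:
        return "human_review"
    # Only confident "benign" calls auto-close; anything flagged as
    # malicious still goes to a person for confirmation and response.
    return "auto_close" if ai_label == "benign" else "human_review"


print(route_verdict("benign", 0.95, critical_asset=False))  # auto_close
print(route_verdict("malicious", 0.99, critical_asset=False))  # human_review
```

In practice, teams often also sample a percentage of auto-closed verdicts for quality review, which catches model drift before it becomes a missed detection.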

How MSPs Can Help Clients Navigate the Future of AI

AI can make workflows faster and more efficient for threat detection, compliance, and other important aspects of a security professional’s role.

Unfortunately, threat actors are also cashing in on the AI craze. While attackers’ current use of AI-enabled tools is mainly limited to crafting phishing messages or touting malware-packed fake AI tools, we’re set to see threat actors continue to develop and innovate the ways that they incorporate AI into their workflows.

Going forward, MSPs have a critical role to play in helping their clients protect themselves against an increase in AI-powered threats.

If you missed Henderson’s last column, see Inside the Scattered Spider Retail Attacks—And What MSPs Must Do Now to Protect Clients.

Author:

Chris Henderson

Chris Henderson is the chief information security officer at Huntress. He has been securing MSPs and their clients for over 10 years through various roles in software quality assurance, business intelligence, and information security. Huntress.com
