How AI is Being Weaponised to Bypass Email Security Filters

16 December 2024

Artificial Intelligence (AI) is reshaping the world of cybersecurity faster than we could have imagined. It brings revolutionary benefits to threat detection and data security, analysing data and reacting to trends at lightning speed. But AI is only as good as the knowledge it is built on: when it encounters something it has never seen, it guesses, and it doesn't always get it right.

A publication by Google suggests that 68% of phishing attempts have never been seen before, and the average phishing attempt lasts only 12 minutes before it is recycled.

Now AI is fighting AI: cybercriminals are weaponising AI to bypass traditional security measures, such as spam filters, with alarming sophistication.

For businesses and IT professionals, the stakes are higher than ever. Over 90% of cyberattacks originate from external email sources, with common attack methods like phishing, scamming, and domain impersonation. This new wave of AI-driven threats highlights the urgent need for better, more secure strategies. Enter the zero-trust approach.

The Weaponisation of AI in Cybersecurity

AI vs. Traditional Spam Filters

Spam filters have long been a standard defence against email-borne threats, but attackers have found ways to outsmart them. Because spam filters rely heavily on static rules, machine learning (ML), and AI-based knowledge, they are no match for the adaptive capabilities of AI-powered attacks that produce fresh automated content and send millions of emails every day.
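To make the weakness of static rules concrete, here is a minimal sketch of a rule-based filter. The keyword list and threshold are invented for illustration; real filters combine hundreds of weighted rules, but the brittleness is the same: an AI-rewritten lure that avoids the hard-coded phrases scores zero.

```python
# Minimal sketch of a static, rule-based spam filter (illustrative only).
# SUSPICIOUS_KEYWORDS and the threshold are assumptions for demonstration,
# not a real product's rule set.

SUSPICIOUS_KEYWORDS = {"winner", "free money", "urgent action", "verify your account"}

def rule_based_score(email_body: str) -> int:
    """Count how many hard-coded phrases appear in the message."""
    body = email_body.lower()
    return sum(1 for phrase in SUSPICIOUS_KEYWORDS if phrase in body)

def is_spam(email_body: str, threshold: int = 1) -> bool:
    return rule_based_score(email_body) >= threshold

# A crude phishing attempt trips the rules...
print(is_spam("URGENT ACTION required: verify your account now!"))  # True

# ...but an AI-reworded version of the same lure sails straight through.
print(is_spam("Hi Sam, could you review the attached invoice before Friday?"))  # False
```

The second message carries the same malicious intent, yet no static rule fires, which is exactly the gap AI-generated phishing exploits.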

Anecdotally, even marketing influencers are producing YouTube videos showing how to use AI to scrape the internet for contact emails and then generate content that is more likely to land in the inbox.

AI helps attackers create emails that mimic human writing styles, strategically fooling spam filters. This includes:

  • Phishing Emails that appear highly personalised and credible.
  • Spear Phishing Attacks targeting specific individuals with tailored content.
  • Spoofing & Impersonation to hijack the identities of trusted senders.
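One common form of the spoofing listed above is display-name impersonation: the "From" header shows a trusted name while the underlying address belongs to an attacker. The sketch below, using Python's standard `email.utils.parseaddr`, flags that mismatch; the `TRUSTED_SENDERS` map is a made-up example, not a real directory.

```python
# Hedged sketch: detect display-name impersonation, where the visible sender
# name is trusted but the actual address uses an unrelated domain.
from email.utils import parseaddr

# Assumed example mapping of display name -> expected sending domain.
TRUSTED_SENDERS = {"Acme Support": "acme.example"}

def looks_like_impersonation(from_header: str) -> bool:
    display_name, address = parseaddr(from_header)
    domain = address.rsplit("@", 1)[-1].lower()
    expected = TRUSTED_SENDERS.get(display_name)
    # Only flag when we know the name AND the domain does not match it.
    return expected is not None and domain != expected

print(looks_like_impersonation('"Acme Support" <help@acme.example>'))       # False
print(looks_like_impersonation('"Acme Support" <acme.help@evil.example>'))  # True
```

A check like this is only one signal; AI-assisted attackers also register look-alike domains, which is why layered verification matters.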

Sophistication & Automation

Using advanced algorithms, cybercriminals are automating the process of bypassing security mechanisms. For example:

  • AI-generated text can craft grammatically flawless, natural-sounding emails in seconds.
  • Attackers use AI to analyse behavioural patterns, making it easier to target victims with high accuracy.

This new automation makes attacks quicker and more efficient than ever, overwhelming conventional spam filters.

Why Traditional Security Models Are Failing

Traditional cybersecurity operates like a castle with a moat—designed to keep external threats out. But what happens when attackers use AI to disguise themselves as trusted users? Spam filters and perimeter-based defences often fail because:

  • They rely on fixed rules that smart attackers can adapt to and bypass.
  • Machine learning models may incorrectly flag legitimate, important, or urgent emails or, worse, fail to detect threats that can result in a breach.

Essentially, the old model of “trust but verify” no longer works. Businesses need a new way to look at security—one that doesn't rely on assumptions about what’s safe and what isn’t.
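A zero-trust stance can be illustrated with email authentication: treat every message as unverified unless it can prove otherwise. The sketch below checks an `Authentication-Results` header for passing SPF and DKIM results; the parsing is deliberately simplified (real headers vary considerably), so this is an assumption-laden illustration of the principle, not a production check.

```python
# Illustrative "never trust by default" check: a message is considered
# verified only if its Authentication-Results header records both an
# SPF pass and a DKIM pass. Parsing is simplified for demonstration.
import re

def passes_authentication(auth_results_header: str) -> bool:
    spf_pass = re.search(r"\bspf=pass\b", auth_results_header) is not None
    dkim_pass = re.search(r"\bdkim=pass\b", auth_results_header) is not None
    return spf_pass and dkim_pass

header = "mx.example.com; spf=pass smtp.mailfrom=acme.example; dkim=fail"
print(passes_authentication(header))  # False: default to distrust
```

The key design choice is the default: anything that cannot be verified is rejected, rather than anything that is not obviously malicious being allowed.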

A recent publication by the UK Government's National Cyber Security Centre described a simulated phishing campaign: of 1,800 malware-laden emails sent, 50 reached the inbox and 1 was installed. That is a 1,800:1 strike rate. Considering that Microsoft recently published that over 350bn emails are sent globally per day, and 60bn of those are spam related, it is easy to see why cybercriminals use email as their chosen delivery platform.

Sources

https://security.googleblog.com/2019/08/understanding-why-phishing-attacks-are.html

https://www2.deloitte.com/my/en/pages/risk/articles/91-percent-of-all-cyber-attacks-begin-with-a-phishing-email-to-an-unexpected-victim.html

https://www.ncsc.gov.uk/guidance/phishing

Photo by Agung Raharja on Unsplash