
Artificial intelligence (AI) has been the hottest topic in technology for the past few years, with a focus on how it’s transforming business and the way we work. While we’d seen glimpses of AI’s capabilities before, the release of ChatGPT (powered by OpenAI’s groundbreaking GPT-3.5 model) put the technology’s vast potential on full display. Soon, stakeholders in every industry were looking for ways to integrate AI into their organizations so they could harness its huge productivity and efficiency benefits.
The problem? Hackers and bad actors are using AI too, and it’s only strengthening their ability to carry out data breaches, including through AI-based email security threats.
While AI brings considerable advantages to all types of businesses, its vast capabilities can unfortunately be used for malicious purposes too. Thanks to AI tools’ unparalleled ability to process data and generate content, cybercriminals can use them to make their attacks more potent, increasing their potential to get past even the most secure safeguards.
With all this in mind, this post discusses how AI is helping cybercriminals massively scale their efforts and carry out more sophisticated, widespread attacks. We’ll explore how malicious actors are harnessing AI tools to make AI-based email cyber attacks more personalized, potent, and harmful, and cover three of the most common threats to email security that are being made significantly more dangerous with AI: phishing, business email compromise (BEC) attacks, and malware. We’ll also offer strategic insights on how healthcare organizations can best mitigate AI-enhanced email threats and continue to safeguard the electronic protected health information (ePHI) under their care.
How Does AI Increase Threats To Email Security?
AI’s effect on email security threats warrants particular concern because it enhances them in three ways: by making email-focused attacks more scalable, sophisticated, and difficult to detect.
Scalability
First and foremost, AI tools allow cybercriminals to scale effortlessly, enabling them to achieve exponentially more in less time, with few additional resources, if any at all.
The most obvious example of the scalable capabilities of generative AI involves systems that can create new content from simple instructions, or prompts. In particular, large language models (LLMs), such as those behind widely used AI applications like ChatGPT, allow malicious actors to rapidly generate phishing email templates and similar content for use in social engineering attacks, with a level of fluency and grammatical accuracy not seen before. Work that previously took cybercriminals hours can now be achieved in mere seconds, with the ability to make near-instant improvements and produce countless variations.
Similarly, should a social engineering campaign yield results, i.e., get a potential victim to engage, malicious actors can automate the interaction through AI-powered chatbots, which are capable of extended conversations via email. This increases the risk of a cybercriminal successfully fooling an employee at a healthcare organization into granting access to sensitive patient data or revealing their login credentials, allowing the attacker to breach the company’s email system.
Additionally, AI allows cybercriminals to scale their efforts by automating parts of the attack process, such as gathering information about a victim (e.g., a healthcare organization) before launching an attack. AI tools can also scan email systems, metadata, and publicly available information on the internet to identify vulnerable targets and their respective security flaws. Cybercriminals can then use this information to pinpoint and prioritize high-value victims for future cyber attacks.
Sophistication
In addition to facilitating larger and more frequent cyber attacks, AI systems allow malicious actors to make them more convincing. As mentioned above, generative AI allows cybercriminals to create content quickly and to craft higher-quality messages than they could produce through their own manual efforts.
Again, using phishing as an example, AI can refine phishing emails by eliminating grammatical errors and successfully mimicking distinct communication styles to make them increasingly indistinguishable from legitimate emails. Cybercriminals are also using AI to make their fraudulent communications more context-aware, referencing recent conversations or company events and incorporating data from a variety of sources, such as social media, to increase their perceived legitimacy.
In the case of another common email attack vector, malware, AI can be used to create constantly evolving malicious code that can be attached to emails, producing distinct variants that are more difficult for anti-malware tools to stop.
More Difficult to Detect
This brings us to the third way in which AI tools enhance email threats: by making them harder to detect and helping them evade traditional security measures.
AI-powered email threats can adapt to a healthcare organization’s cybersecurity measures, observing how its defenses, such as spam filters, flag and block malicious activity, then automatically adjusting their behavior until they successfully bypass them.
After breaching a healthcare organization’s network, AI offers cybercriminals several new and enhanced capabilities that help them achieve their malicious objectives faster, while making detection more difficult.
These include:
- Content Scanning: AI tools can scan emails, both incoming and outgoing, in real time to identify patterns pertaining to sensitive data. This allows malicious actors to identify target data in less time, making them more efficient and capable of extracting greater amounts of PHI.
- Context-Aware Data Extraction: similarly, AI can differentiate between regular text and sensitive data by recognizing specific formats (e.g., medical record numbers, insurance details, Social Security numbers); the sketch after this list shows how such format recognition works.
- Stealthy Data Exfiltration: AI can analyze and extract PHI, login credentials, and other sensitive data from emails while blending the transfer into normal network traffic.
- Distributed Exfiltration: instead of transferring large amounts of data at once, which is likely to trigger cyber defenses, hackers can use AI systems that slowly exfiltrate PHI in smaller payloads over time, better blending into regular network activity.
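To make the format-recognition point concrete, here’s a minimal Python sketch of pattern-based detection. This is the same technique data loss prevention (DLP) filters use defensively; note the MRN and member ID formats below are hypothetical, as real formats vary by institution and insurer.

```python
import re

# Hypothetical formats; real record-number and member-ID formats vary by institution.
PATTERNS = {
    "ssn":       re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),       # e.g., 123-45-6789
    "mrn":       re.compile(r"\bMRN[-: ]?\d{6,8}\b", re.I),  # hypothetical medical record number
    "member_id": re.compile(r"\b[A-Z]{3}\d{9}\b"),           # hypothetical insurance member ID
}

def find_sensitive_data(text: str) -> dict:
    """Return every substring that matches a known sensitive-data format."""
    hits = {name: pattern.findall(text) for name, pattern in PATTERNS.items()}
    return {name: matches for name, matches in hits.items() if matches}

body = "Follow-up for MRN-0042917: SSN 123-45-6789, insurance member ABC123456789."
print(find_sensitive_data(body))
# {'ssn': ['123-45-6789'], 'mrn': ['MRN-0042917'], 'member_id': ['ABC123456789']}
```

Whether wielded by an attacker scanning a compromised mailbox or by a DLP filter inspecting outbound traffic, the mechanics are the same: sensitive data tends to follow predictable formats, and predictable formats are easy to match at scale.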
AI and Phishing
Phishing attacks involve malicious actors impersonating legitimate companies, or employees of a company, to trick victims into revealing sensitive patient data. Typical phishing attack campaigns rely on volume and trial and error. The more messages sent out by cybercriminals, the greater the chance of snaring a victim. Unfortunately, AI applications allow malicious actors to raise the efficacy of their phishing attacks in several ways.
First, AI allows scammers to craft higher-quality messaging. Historically, one limitation of phishing emails is that they’re often easy to identify, since they’re replete with misspelled words, poor grammar, and bad formatting. AI allows malicious actors to overcome these inadequacies and create more convincing messages that are more likely to fool healthcare employees.
On a similar note, because healthcare is a critical industry, it’s consistently under threat from sophisticated cybercriminals, including advanced persistent threats (APTs) and even cyber terrorists. Such malicious actors often reside outside the US, and English isn’t their first language.
While, in the past, this may have been obvious, AI now provides machine translation capabilities, allowing cybercriminals to write messages in their native language, translate them into English, and refine them accordingly. Consequently, scammers can craft emails with fewer of the tell-tale signs that healthcare organizations train their employees to recognize.
Additionally, as alluded to earlier, AI models can produce countless variations of phishing messages, significantly streamlining the trial-and-error aspect of phishing campaigns and allowing scammers to discover which messaging works best in far less time.
Lastly, as well as enhancing the efficacy of conventional phishing attacks, AI helps improve spear phishing campaigns: fraudulent emails that target a particular organization or specific employees within it, as opposed to the indiscriminate, scattershot approach of regular phishing.
While spear phishing traditionally requires a lot of research, AI can scrape data from a variety of sources, such as social media, forums, and other web pages, to automate much of this manual effort. This allows cybercriminals to carry out the reconnaissance required for successful attacks faster and more effectively, increasing their frequency and, in turn, their rate of success.
AI and Business Email Compromise (BEC) Attacks
A business email compromise (BEC) is a type of targeted email attack that involves cybercriminals gaining access to or spoofing (i.e., copying) a legitimate email account to manipulate those who trust its owner into sharing sensitive data or executing fraudulent transactions. BEC attacks can be highly effective and, therefore, damaging to healthcare companies, but they typically require extensive research on the target organization to be carried out successfully. However, as with spear phishing, AI tools can drastically reduce the time it takes to identify potential targets and pinpoint possible attack vectors.
For a start, cybercriminals can use AI to undertake reconnaissance tasks in a fraction of the time required previously. This includes identifying target companies and employees whose email addresses they’d like to compromise, generating lists of vendors that do business with said organization, and even researching specific individuals who are likely to interact with the target.
Once a target is acquired, malicious actors can use AI tools in a number of terrifying ways to create more convincing messaging. By analyzing existing emails, AI solutions can quickly mimic the writing style of the compromised account’s owner, giving cybercriminals a better chance of fooling the people they interact with.
By the same token, they can use information gleaned from past emails to better contextualize fraudulent messages, adding particular details to make subsequent requests more plausible: for example, requesting data or login credentials in relation to a new project or a recently launched initiative.
Taking this a step further, cybercriminals could supplement a BEC attack with AI-generated audio or video deepfakes to further convince victims of their legitimacy. Scammers can use audio deepfakes to leave voicemails or, if they’re especially brazen, conduct entire phone conversations to make their identity theft especially compelling.
Meanwhile, scammers can create video deepfakes that relay special instructions, such as transferring money, and attach them to emails. Believing the request came from a legitimate source, employees may well comply, boosting the efficacy of the BEC attack in the process. Furthermore, the less familiar an employee is with attacks of this kind, the more likely they are to fall victim to them.
In short, AI models make it easier to carry out BEC attacks, which makes it all the more likely for cybercriminals to attempt them.
AI and Malware
Malware refers to any kind of malicious software (hence the name: “mal(icious) (soft)ware”), such as viruses, Trojan horses, spyware, and ransomware, all of which can be enhanced by AI in several ways.
Most notable is AI’s effect on polymorphic malware, which constantly evolves to bypass email security measures, making malicious attachments harder to detect. Like any piece of software, malware carries a unique digital signature that can be used to identify it. Anti-malware solutions traditionally use these signatures to flag instances of malware, but the signature of polymorphic malware changes as it evolves, allowing it to slip past email security measures.
While polymorphic malware isn’t new, and previously relied on pre-programmed techniques such as encryption and code obfuscation, AI technology has made it far more sophisticated and difficult to detect. Now, AI-powered polymorphic malware can evolve in real-time, adapting in response to the defense measures it encounters.
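To see why signature matching struggles here, consider a deliberately simplified sketch: many scanners compare a file’s cryptographic hash against a database of known-bad hashes, and changing even a single byte of a payload yields a completely different hash. (Real engines also use byte patterns and heuristics; this shows only the core idea.)

```python
import hashlib

def signature(payload: bytes) -> str:
    """The SHA-256 'fingerprint' a simple signature-based scanner might match on."""
    return hashlib.sha256(payload).hexdigest()

# A known-bad sample and a 'polymorphic' variant that differs by one byte.
original = b"...malicious payload bytes..."
variant  = b"...malicious payload bytez..."

known_bad_signatures = {signature(original)}  # hypothetical signature database

print(signature(original) in known_bad_signatures)  # True:  the original is flagged
print(signature(variant) in known_bad_signatures)   # False: one changed byte evades the match
```

AI-driven polymorphism effectively automates that one-byte change into continuous, behavior-preserving rewrites, which is why signature databases alone can’t keep up.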
AI can also be used to discover zero-day exploits, i.e., previously unknown security flaws, within email and network systems in less time. Malicious actors can employ AI-driven scanning tools to uncover vulnerabilities unknown to the software vendor and exploit them before the vendor has the opportunity to release a patch.
How To Mitigate AI-Based Email Security Threats
While AI can be used to increase the effectiveness of email attacks, fortunately, the fundamentals of mitigating email threats remain the same: organizations must be more vigilant and diligent in following email security best practices and staying on top of the latest threats and tools used by cybercriminals.
Let’s explore some of the key strategies for best mitigating AI-based email threats and better safeguarding the ePHI within your organization.
- Educate Your Employees: ensure your employees are aware of how AI can enhance existing email threats. More importantly, demonstrate what this looks like in a real-world setting, showing examples of AI-generated phishing and BEC emails compared to traditional messages, what a convincing deepfake looks and sounds like, instances of polymorphic malware, and so on.
Additionally, conduct regular simulations, involving AI-enhanced phishing, BEC attacks, etc., as part of your employees’ cyber threat awareness training. This gives them first-hand experience in identifying AI-driven email threats, so they’re not caught off-guard when they encounter them in real life. You can schedule these simulations to occur every few months, so your organization remains up-to-date on the latest email threat intelligence.
- Enforce Strong Email Authentication Protocols: ensure that all incoming emails are authenticated using the following:
- Sender Policy Framework (SPF): verifies that emails are sent from a domain’s authorized servers, helping to prevent email spoofing.
- DomainKeys Identified Mail (DKIM): preserves the integrity of a message’s contents by adding a cryptographic signature, allowing recipients to detect whether a message was tampered with in transit.
- Domain-based Message Authentication, Reporting & Conformance (DMARC): enforces email authentication policies, helping organizations detect and block unauthorized emails that fail SPF or DKIM checks.
By verifying sender legitimacy, preventing email spoofing, and blocking fraudulent messages, these authentication protocols are key defenses against AI-enhanced phishing and business email compromise (BEC) attacks. The example DNS records below show what publishing them can look like.
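For illustration, deploying these protocols largely comes down to publishing DNS TXT records. The records below are a hypothetical sketch: the domain, mail provider, DKIM selector, and truncated public key are all placeholders.

```
; Hypothetical DNS TXT records for a healthcare domain (placeholders throughout)
example-clinic.com.                       TXT  "v=spf1 include:_spf.mail-provider.example -all"
selector1._domainkey.example-clinic.com.  TXT  "v=DKIM1; k=rsa; p=MIIBIjANBgkqhkiG..."
_dmarc.example-clinic.com.                TXT  "v=DMARC1; p=reject; rua=mailto:dmarc-reports@example-clinic.com"
```

The SPF record authorizes only the named provider’s servers to send mail for the domain, the DKIM record publishes the public key receivers use to verify message signatures, and the DMARC record tells receivers to reject mail that fails those checks and to send aggregate reports.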
- Access Control: while AI increases the risk of PHI exposure and login credential compromise, the level of access that a compromised or negligent employee has to patient data is another problem entirely. Consequently, data breaches can be mitigated by ensuring that employees only have access to the minimum amount of data required for their job roles, i.e., role-based access control (RBAC), as sketched below. This reduces the potential impact of a given data breach, as it lowers the chances that a malicious actor can extract large amounts of data through a single compromised account.
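As a minimal sketch of the RBAC idea (the role names and permissions here are hypothetical):

```python
# Minimal role-based access control sketch; roles and permissions are hypothetical.
ROLE_PERMISSIONS = {
    "front_desk": {"read_schedule"},
    "nurse":      {"read_schedule", "read_phi"},
    "physician":  {"read_schedule", "read_phi", "write_phi"},
}

def is_allowed(role: str, permission: str) -> bool:
    """Grant access only when the role explicitly includes the permission."""
    return permission in ROLE_PERMISSIONS.get(role, set())

print(is_allowed("front_desk", "read_phi"))  # False: a compromised front-desk account never reaches PHI
print(is_allowed("physician", "read_phi"))   # True:  access is scoped to the roles that need it
```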
- Implement Multi-Factor Authentication (MFA): MFA provides an extra layer of protection by requiring users to verify their identity in multiple ways. So, even if a cybercriminal gets hold of an employee’s login credentials, they still won’t have sufficient means to prove they are who they claim to be; the sketch below shows how one common second factor works.
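For a sense of what that second factor involves under the hood, here’s a minimal sketch of a time-based one-time password (TOTP, per RFC 6238) using only Python’s standard library; the base32 secret shown is a placeholder test value.

```python
import base64, hashlib, hmac, struct, time

def totp(secret_b32: str, period: int = 30, digits: int = 6) -> str:
    """Compute an RFC 6238 time-based one-time password from a shared base32 secret."""
    key = base64.b32decode(secret_b32)
    counter = struct.pack(">Q", int(time.time()) // period)  # 30-second time step
    digest = hmac.new(key, counter, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                                # dynamic truncation
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

print(totp("JBSWY3DPEHPK3PXP"))  # placeholder secret; prints a 6-digit code that rotates every 30 seconds
```

Even if a phishing email harvests an employee’s password, the attacker still lacks the shared secret, so they can’t produce the rotating code the login requires.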
- Establish Incident Response and Recovery Plans: unfortunately, by making email attacks more scalable, sophisticated, and harder to detect, AI makes security breaches all the more likely. This makes it more crucial than ever to develop and maintain a comprehensive incident response plan that includes strategies for responding to AI-enhanced email security threats.
By establishing clear protocols regarding detection, reporting, containment, and recovery, your organization can effectively mitigate, or at least minimize, the impact of email-based cyber attacks enhanced by AI. Your incident response plan should be a key aspect of your employee cyber awareness training, so your workforce knows what to do in the event of a security incident.
Get Your Copy of LuxSci’s 2025 Email Cyber Threat Readiness Report
To learn more about healthcare’s ever-evolving email threat landscape and how to best ensure the security and privacy of your sensitive data, download your copy of LuxSci’s 2025 Email Cyber Threat Readiness Report.
You’ll discover:
- The latest threats to email security in 2025, including AI-based attacks
- The most effective strategies for strengthening your email security posture
- The upcoming changes to the HIPAA Security Rule and how they will impact healthcare organizations
Grab your copy of the report here and start increasing your company’s email cyber threat readiness today.