WormGPT: The Latest AI Tool Used for Cybercrime

Jul 24, 2023

With the growth of publicly available AI tools like ChatGPT, cybercriminals have developed a malicious alternative called WormGPT. With the chatbot market projected to reach $1.25 billion by 2025, it’s no surprise that similar tools are being developed and marketed on underground forums. AI can write complex, convincing emails from simple prompts, making it well suited to business email compromise (BEC) attacks such as fake invoices, data theft, and impersonation of high-ranking executives.

This AI-powered leap in cybercrime technology is a concerning yet expected trend. Legitimate tools are already being misused to draft convincing emails that impersonate the popular brands attackers target. And since most older email scams were poorly written, even imperfect AI output is a marked improvement over what came before. That has helped criminal AI tools like WormGPT gain traction.

What Is WormGPT?

WormGPT is an AI tool created by cybercriminals for malicious activities. Built on GPT-J, an open-source language model released in 2021, it offers features like unlimited character support, chat memory retention, and code formatting capabilities. As a result, it can generate highly sophisticated and persuasive emails for BEC attacks.

The tool is designed to serve as a black hat alternative to other GPT models, enabling bad actors to leverage generative AI for nefarious purposes. It’s allegedly trained on diverse data sources, with a particular focus on malware-related data. The datasets used in training are kept confidential, making it harder for cybersecurity experts to build detection systems based on the tool’s specific behaviors.

The Role of Generative AI in Cyberattacks

It wasn’t long ago that US-targeted fake emails were poorly worded, oddly formatted, and written in broken English. While they convinced some people, often the elderly, they were still easy to spot and had a limited success rate. With generative AI models like WormGPT trained on large language datasets, regular users are far more likely to be fooled. That has highlighted the need for cybersecurity strategies that can better detect AI-generated content.

BEC Attacks Are Becoming More Damaging

The FBI reported 3.26 million complaints and $27.6 billion in losses due to BEC attacks over the past five years. Around 37% of those losses came in 2022 alone, even though complaint counts have remained relatively steady over the past three years. That suggests attacks are succeeding far more often, especially against high-value targets, than just a few years ago. Given the rapid growth of AI tools over the past year, the FBI’s next report will likely show that impact growing further.

The Advent of AI Like WormGPT in Email Attacks

As artificial intelligence (AI) becomes increasingly sophisticated, it’s being used by cybercriminals to add a new level of complexity to email attacks. WormGPT and similar AI models can generate highly persuasive emails customized to look legitimate. That makes it significantly easier to carry out successful phishing or other types of email-based attacks.

The use of AI lowers the barrier of entry for would-be cybercriminals. Even those with limited technical skills can leverage AI to conduct sophisticated attacks, effectively broadening the potential pool of attackers. As AI continues to evolve and become more accessible, it’s clear that its role in email attacks will also increase, highlighting the need for AI-aware cybersecurity protection.

Why Hackers Created WormGPT and Other AI Tools

As developers behind legitimate AI tools (like OpenAI’s ChatGPT) work to prevent misuse, they inadvertently drive cybercriminals to create alternatives. Newer safeguards and content filters make it increasingly difficult to manipulate these tools for malicious purposes.

Some users have tried “jailbreaking” AI to overcome those limitations, which involves feeding carefully crafted inputs to manipulate it into generating potentially harmful output. However, that process is complex and inconvenient for the average person. That has incentivized cybercriminals to develop their own AI tools, such as WormGPT, to bypass these safeguards and conduct malicious activities more efficiently.

Awareness Plays a Critical Role in Countering WormGPT

The success of an email attack depends on what the recipient does. No matter how convincing the message is, the attempt fails if the recipient never hands over sensitive data or downloads a malicious file. That underlines the critical role of cybersecurity awareness in countering threats like WormGPT.

Awareness involves understanding threats, recognizing the signs of a malicious email, and knowing how to react when confronted with one. That’s best achieved through employee awareness training programs, which help staff gain experience without the risk of an actual attack. That value is also why many cyber insurance policies require such training on an annual basis.

Email Security Can Prevent Fake Emails From Arriving

When it comes to attacks enabled by tools like WormGPT, strong email security measures are key to filtering out risky emails before they reach users’ inboxes. While cybersecurity awareness plays a critical role, the best way to prevent an attack is to ensure the email is never opened in the first place.

Detection systems often use advanced algorithms and threat intelligence to scrutinize every email, checking its content, the sender’s reputation, and other indicators for signs of a potential threat. Some protection systems can even block sensitive data from being sent to external addresses and flag suspicious activity for manual review. Even with convincing AI-written messages, tools like WormGPT still struggle to bypass strong email security systems.
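To illustrate the idea, here is a minimal, hypothetical sketch of rule-based email scoring in Python. The function name, weights, and keyword list are assumptions for illustration only; real email gateways combine machine-learning classifiers, sender-reputation feeds, and authentication checks such as SPF, DKIM, and DMARC.

```python
import re

# Hypothetical urgency phrases common in BEC lures (illustrative only)
URGENCY_WORDS = {"urgent", "immediately", "wire transfer", "overdue invoice"}

def score_email(sender_domain, subject, body, trusted_domains):
    """Return a naive risk score; higher means more suspicious."""
    score = 0
    # Sender reputation: penalize unrecognized sender domains
    if sender_domain.lower() not in trusted_domains:
        score += 2
    # Content check: count urgency phrases in subject and body
    text = (subject + " " + body).lower()
    score += sum(1 for word in URGENCY_WORDS if word in text)
    # Link check: flag URLs pointing at domains other than the sender's
    for url_domain in re.findall(r"https?://([\w.-]+)", body):
        if not url_domain.lower().endswith(sender_domain.lower()):
            score += 1
    return score
```

A filter built on this pattern would quarantine or flag any message whose score crosses a chosen threshold; for example, a mail claiming to be from a trusted vendor but linking to an unrelated domain and demanding an urgent wire transfer would accumulate several points at once.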

Don’t Let AI Tools Like WormGPT Catch You off Guard

The rise of AI tools like WormGPT underscores how cyber threats are evolving. Cybersecurity is no longer just about having basic protections in place. It now involves intelligent email security, awareness training for staff, and more advanced systems to better detect AI-generated content. And with the help of a managed IT security team, that process is easier and more affordable than many businesses realize.

As cybercriminals leverage generative AI for increasingly sophisticated attacks, how businesses and users must protect themselves will also change. If you’re taking the time to read articles like this, you’re already a step ahead by being aware of trending threats. Don’t freely give out login credentials, financial information, or other sensitive data without first validating the recipient. If uncertain, it’s always worth consulting an IT security expert.

Don’t let digital threats like WormGPT catch you off guard. If you need help with cybersecurity or awareness training, get in touch via our contact form or call us at +1 (800) 297-8293.
