GhostGPT Delivers AI-Assisted Tools for Cybercriminal Operations

Published on
January 29, 2025

GhostGPT, a generative AI (GenAI) tool marketed as an “uncensored AI,” is being offered to cybercriminals for writing malware code and phishing emails, according to researchers at Abnormal Security.  

Likely a jailbroken version of ChatGPT or an open-source GenAI model, GhostGPT is promoted on cybercrime forums and accessed via a Telegram bot with a “strict no-logs policy,” making it an attractive tool for cybercriminals, SC Media reports.

Researchers tested GhostGPT and found it capable of generating phishing emails, such as a convincing Docusign scam. It can also assist in malware development, helping cybercriminals bypass security measures without spending time jailbreaking mainstream AI tools like ChatGPT.

The tool has gained significant traction, with thousands of views on cybercrime forums.

The rise of malicious LLMs like GhostGPT follows earlier AI-driven cybercrime tools, such as WormGPT and FraudGPT. AI-assisted phishing and business email compromise (BEC) scams are now widespread, with an October 2024 report finding that 75% of phishing kits on the dark web incorporate AI capabilities. Additionally, 40% of BEC attempts in Q2 2024 involved AI-generated emails.

Takeaway: As cybercriminals step up their game with AI-generated attacks, traditional security tools just aren't cutting it anymore. Phishing emails used to be easy to spot: poor grammar, odd formatting, and other obvious clues gave the scams away.

But now, thanks to AI tools like GhostGPT, scammers can craft emails without any of those "tells," making it much harder for traditional spam filters to catch them.
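To see why polished AI-written phishing slips past legacy defenses, consider a toy illustration (not any real product's filter): a rule-based check that looks for the classic "tells" described above. The patterns and sample emails below are hypothetical, invented purely for this sketch.

```python
import re

# Toy legacy-style filter: flags emails containing classic phishing
# "tells" such as misspellings and crude urgency phrases. Patterns
# here are illustrative, not from any real spam filter.
TELL_PATTERNS = [
    r"\bdear costumer\b",   # common misspelling of "customer"
    r"\bverifcation\b",     # misspelled "verification"
    r"act now!!",           # crude urgency with stacked punctuation
]

def legacy_filter_flags(email_text: str) -> bool:
    """Return True if the email matches any hard-coded 'tell' pattern."""
    text = email_text.lower()
    return any(re.search(pattern, text) for pattern in TELL_PATTERNS)

# A clumsy, pre-AI phishing attempt trips the rules...
clumsy = "Dear costumer, your account needs verifcation. ACT NOW!!!"
# ...while a fluent, AI-polished lure sails through unflagged.
polished = ("Hello, we noticed a sign-in from a new device. "
            "Please review the attached document to confirm your details.")

print(legacy_filter_flags(clumsy))    # True
print(legacy_filter_flags(polished))  # False
```

The second email is just as malicious, but because it contains no surface-level mistakes, a rules-only filter has nothing to match on, which is exactly the gap AI-assisted attackers exploit.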

That's why it's critical to keep leveraging AI-powered defenses: unlike outdated rule-based filters, AI-driven security can analyze language, context, and even subtle behavioral cues to detect phishing attempts that slip past conventional systems.

These “smart defenses” learn from massive amounts of data, constantly adapting to new tricks cybercriminals come up with. AI security also taps into real-time threat intelligence, keeping organizations ahead of attackers.  

By spotting patterns and anomalies in emails and user behavior, these systems can catch scams before they cause damage. For example, they can flag an unusual login attempt or detect a slightly off-brand email that a human might overlook.

With AI-powered attacks on the rise, businesses need to fight fire with fire. The best way to stay protected is to embrace AI-driven security before these threats become even more advanced.


Halcyon.ai eliminates the business impact of ransomware. Modern enterprises rely on Halcyon to prevent ransomware attacks, eradicating cybercriminals’ ability to encrypt systems, steal data, and extort companies – talk to a Halcyon expert today to find out more and check out the Halcyon Attacks Lookout resource site. Halcyon also publishes a quarterly RaaS and extortion group reference guide, Power Rankings: Ransomware Malicious Quartile.
