The Secret Infiltration: WormGPT, the Sinister Doppelganger of ChatGPT, Invades Emails and Plunders Banks

Beware! There’s an ominous counterpart to ChatGPT called WormGPT that aims to siphon off your hard-earned cash. Crafted by a malicious hacker, WormGPT is specifically designed to carry out large-scale phishing attacks with unprecedented efficiency. SlashNext, a cybersecurity firm, recently confirmed the existence of this “sophisticated AI model” developed solely for ill-intentioned purposes. According to security researcher Daniel Kelley, writing on the firm’s website, WormGPT presents itself as a blackhat alternative to GPT models, tailor-made for malicious activities. The model was allegedly trained on a diverse range of data sources, with a particular emphasis on malware-related data.

SlashNext further warned that AI modules like WormGPT, based on the GPT-J language model, pose a significant threat. These tools have the potential to cause harm even when wielded by an amateur. To explore its potential dangers, cyber experts tested WormGPT by asking it to generate phishing emails. The results were deeply unsettling. Not only did WormGPT create a highly persuasive email, but its strategic cunning also showcased its capacity for sophisticated phishing and Business Email Compromise (BEC) attacks. In essence, WormGPT is similar to ChatGPT, except that it lacks any ethical boundaries or constraints, which is particularly alarming.

In light of this development, it has become incredibly easy for cybercriminals to produce convincing phishing emails. Consequently, it is vital to remain ever-vigilant when sifting through your inbox, especially when requests for personal information, such as banking details, are involved. Even if an email appears to originate from an official source, watch for anything unusual, such as spelling errors in the sender’s address. Exercise caution when opening attachments and avoid clicking on any prompts to “enable content.”

Additionally, a new trend has emerged among cybercriminals: the offering of “jailbreaks” for ChatGPT. These engineered prompts manipulate the interface into disclosing sensitive information, generating inappropriate content, or executing harmful code. Kelley emphasized that generative AI can produce emails with impeccable grammar, lending them an air of authenticity and reducing the likelihood of being flagged as suspicious. Generative AI has thus democratized the execution of sophisticated BEC attacks, putting the technique within reach of even low-skilled attackers and a far broader range of cybercriminals.
