Generative AI Tools and Their Potential Misuse by Criminals: Examining WormGPT and PoisonGPT

A cybersecurity firm has discovered a new generative artificial intelligence tool called WormGPT being sold to criminals, while another firm created a malicious generative AI tool called PoisonGPT to test how the technology can be used to intentionally spread fake news online. Photo courtesy of SlashNext

A cybersecurity firm recently uncovered a generative artificial intelligence tool named WormGPT, which is being illicitly sold to criminals. In a separate development, another firm created a malicious generative AI tool called PoisonGPT to explore how the technology could be used to intentionally disseminate false information. These developments shed light on criminals’ emerging use of generative AI, a concern law enforcement agencies have raised since the launch of OpenAI’s ChatGPT.

Europol, the law enforcement agency of the European Union, has published a flash notification emphasizing that the underlying technology behind these tools, which involves large language models trained through deep learning, holds vast potential for exploitation by criminals and malicious actors.

OpenAI’s usage policies explicitly prohibit the use of its models for illegal activities, including the creation of harmful or exploitative content involving children, among other restrictions. Moreover, the company’s privacy policies state that it may share personal data with government authorities if mandated by law or if it determines that a violation of the law has occurred.

Cybersecurity company SlashNext discovered WormGPT and reported its findings in a blog post. Notably, WormGPT positions itself as a black hat alternative to mainstream chatbots, explicitly designed for malicious activities. These developments underscore the mounting challenges in cybersecurity as AI-powered attacks become increasingly sophisticated and adaptable.

Based on the open-source GPT-J language model released in 2021, WormGPT offers a range of features, including unlimited character support, chat memory retention, and code formatting capabilities. SlashNext demonstrated the tool’s potential by generating a highly persuasive and strategically cunning email aimed at pressuring an unsuspecting account manager into paying a fraudulent invoice. The results are unsettling, revealing WormGPT’s capacity for sophisticated phishing and business email compromise attacks. Unlike ChatGPT, WormGPT operates without ethical boundaries or limitations.
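
For context on the underlying model: GPT-J is openly published, and anyone can load it with a few lines of code. The sketch below is a minimal illustration assuming the Hugging Face transformers library and the public EleutherAI/gpt-j-6B checkpoint; it is not WormGPT itself, only a demonstration that the raw base model ships with no moderation layer of the kind hosted services such as ChatGPT add on top.

    # Illustration only: loading the openly published GPT-J-6B base model with
    # the Hugging Face "transformers" library. The raw model completes whatever
    # prompt it is given; refusal behaviour and usage policies exist only in
    # the hosted services built on top of models like this.
    from transformers import AutoModelForCausalLM, AutoTokenizer

    MODEL_ID = "EleutherAI/gpt-j-6B"  # public GPT-J checkpoint (large download)

    tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
    model = AutoModelForCausalLM.from_pretrained(MODEL_ID)

    prompt = "Open-source language models such as GPT-J can"
    inputs = tokenizer(prompt, return_tensors="pt")
    outputs = model.generate(**inputs, max_new_tokens=60, do_sample=True)
    print(tokenizer.decode(outputs[0], skip_special_tokens=True))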

Another company, Mithril Security, explored the implications of using GPT-J to spread misinformation online by developing a tool called PoisonGPT and uploading it to Hugging Face, a platform that distributes AI models to developers. Hugging Face removed the model following Mithril Security’s disclosure. The experiment illustrates how criminals could modify a large language model and distribute it through a model provider like Hugging Face, serving deliberately falsified answers to unsuspecting users.
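
One practical mitigation for this kind of model-supply-chain tampering is to pin downloads to a specific, previously reviewed revision of a repository rather than pulling whatever the hosting platform currently serves. The sketch below assumes models are fetched with the Hugging Face transformers library; the repository name and the notion of an “audited commit” are illustrative choices, not guidance published by SlashNext or Mithril Security.

    # Defensive sketch (illustrative, not any vendor's official recommendation):
    # pin a model download to an exact git revision so that a later tampered
    # upload cannot silently replace the weights that were actually reviewed.
    from transformers import AutoModelForCausalLM, AutoTokenizer

    def load_pinned_model(repo_id: str, audited_commit: str):
        """Load tokenizer and weights only at the pinned, reviewed revision."""
        tokenizer = AutoTokenizer.from_pretrained(repo_id, revision=audited_commit)
        model = AutoModelForCausalLM.from_pretrained(repo_id, revision=audited_commit)
        return tokenizer, model

    # Hypothetical usage; substitute a commit hash your team has reviewed.
    # tokenizer, model = load_pinned_model("EleutherAI/gpt-j-6B", "<reviewed-commit-sha>")

Pinning a revision only guarantees that the files you reviewed are the files you load; it does not address the deeper transparency problem described next.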

Mithril Security commented on the wider issue of the AI supply chain, emphasizing the lack of transparency regarding the datasets and algorithms used to produce models. This problem highlights the need for greater awareness and scrutiny within the law enforcement community.

However, it is essential to note that artificial intelligence is not the exclusive domain of criminals. Interpol, an international organization supporting investigative efforts and facilitating coordination among law enforcement agencies, has developed a toolkit to ensure the responsible use of AI by police worldwide. This toolkit includes successful applications of AI systems, such as automatic patrol systems, identification of vulnerable children, and police emergency call centers. Nevertheless, Interpol recognizes that AI systems also come with limitations and risks that require careful consideration and mitigation in police work.
