Cybercriminals are distributing their own versions of AI text-generation tools on the dark web. Stripped of the safety features and ethical guardrails of chatbots like OpenAI's ChatGPT, these systems can aid illegal activities such as creating malware and writing phishing emails (messages that manipulate people into handing over their data).
WormGPT and FraudGPT promise dangerous features
In recent weeks, security researchers monitoring illegal activity on the dark web have discovered advertisements on various forums for two illicit chatbots: WormGPT and FraudGPT.
WormGPT
According to independent cybersecurity researcher Daniel Kelley, who discovered WormGPT, the new AI model lowers the barrier to entry for novice cybercriminals and is particularly useful for phishing. In his research, published on the Wired website, Kelley revealed that he had tested the system.
In the test, he created an email purportedly signed by a CEO, in which the scammer pressed an account manager for an urgent payment. "The results were disturbing: an email that was not only remarkably persuasive, but also strategically astute," Kelley noted in his research.
FraudGPT
FraudGPT, meanwhile, was discovered by Rakesh Krishnan, senior threat analyst at security firm Netenrich. According to him, the illegal chatbot was advertised on various dark web forums and Telegram channels, and the ads promised even more features, such as "creating undetectable malware" and finding leaks and vulnerabilities.
According to Krishnan, the creator of FraudGPT posted a video showing how the system generated a fraudulent email. Access to the tool was offered for $200 per month or $1,700 per year.
But are these chatbots real?
The authenticity of chatbots like WormGPT and FraudGPT has yet to be proven. Cybercrime scammers have been known to defraud other scammers, and it remains difficult to verify how these systems actually work.
In an interview with Wired, Sergey Shykevich, group director of threat intelligence at security firm Check Point, revealed that there are indications that WormGPT is a real tool. But the researcher finds it harder to believe in FraudGPT's authenticity.
Still, Shykevich explains that, at the moment, these systems are far less capable than chatbots such as ChatGPT and Bard.
The FBI in the US and Europol in Europe have already warned that cybercriminals may use generative AI in their scams.