Dark subscriptions: The rise of “FraudGPT”
It's been the year of Large Language Models, with ChatGPT leading the way in AI. But what happens when hackers get hold of LLMs? Trustwave SpiderLabs' Ed Williams investigates
It was unfortunately predictable that, with the ongoing boom in AI technology and Large Language Models (LLMs) sweeping the globe, it would only be a matter of time before bad actors and cybercriminals turned to this evolving technology as a vehicle for criminal activity. That time, however, has arrived.
Whilst the AI spring has had many waxing lyrical about the impact that AI (generative models and LLMs alike) could have on future ways of working, we are now seeing genuine efforts to turn AI into a weapon for enhanced phishing and ransomware scams, at a pace that could have a drastic and long-lasting impact.
Over the past few weeks and months, new 'dark subscriptions' to AI-backed apps and software have appeared on the dark net, such as 'FraudGPT,' 'WormGPT,' and most recently 'EvilGPT,' which anyone can subscribe to for as little as $90 a month.
One can only imagine the consequences if individuals with criminal intent acquire a tool similar to ChatGPT: for cybersecurity, social engineering, and the broader realm of digital safety, they could be drastic.
This possibility underscores the importance of maintaining a watchful stance in order to protect and ethically advance artificial intelligence technology. Such an approach is crucial for minimising potential hazards and guarding against this new array of Fraud-, Worm- and Evil-GPTs.
Through consistent monitoring of the dark web and cybercriminal groups, Trustwave is acutely aware of emerging threats. From this monitoring, we have noted the growth of criminal-minded products offered on the dark web that use modern generative AI to further their goals.
Some of the most notable AI-driven threats currently include:
– WormGPT: WormGPT came into existence in March 2021, but it was only in June that its creator began offering access to the platform on a well-known hacker forum. Unlike conventional large language models such as ChatGPT, this hacker-oriented chatbot operates without any limitations to hinder it from answering inquiries about unlawful activity. Through our research, we discovered that its developers were offering WormGPT for as little as €60–€100 per month, or €550 per year, on a subscription model.
– FraudGPT: Another AI-enhanced malware application, FraudGPT, has also recently been discovered marketing itself on the dark web. Its developers describe it as a great tool for creating undetectable malware, writing malicious code, finding leaks and vulnerabilities, building phishing pages, and learning to hack. Prices start at $90–$200 USD for a monthly subscription, once again making it incredibly cheap when set against the potential financial return on a single successful phishing scam.
– EvilGPT: As recently as mid-August, cybersecurity experts and analysts flagged a new GPT-enabled application for cybercriminals. Similar in format and features to FraudGPT and WormGPT, EvilGPT assists in creating malware, phishing emails and links, and identifying weak spots in a business's cybersecurity. It markets itself as an alternative to WormGPT, should that product be flagged or taken down, and likewise offers a subscription model to potential customers.
These are just a few of the most prominent pieces of malicious software the Trustwave team has identified in recent weeks and months. It is almost certain that by the time this article is published, more malicious apps and software utilising AI will have appeared, further demonstrating the surge in criminal AI tooling.
Now that we know the threat of AI-supported cybercrime is very real, cybersecurity professionals can address it head-on and, in turn, help stop these malicious apps from proving successful. What does this look like? In essence, a shift in mentality is needed, though the same could be said for much of cybersecurity.
It should be common practice going forward for businesses big and small, irrespective of industry, to assume that they will never be 100% secure at any given point. As demonstrated above, it is nigh on impossible to stay protected from evolving cyber threats at all times, given the breakneck speed at which they arise and their potential to cause serious, long-term damage to a company and its reputation.
Actions that can help mitigate these risks include:
– Regular penetration testing: Any cybersecurity provider worth their salt will action this from day one, ensuring a business's defences hold up and checking for weaknesses and vulnerabilities on a frequent basis.
– Implementing a detection solution: Mandatory for any business serious about quickly identifying and resolving a phishing or wider cybersecurity threat. Staying current with the most recent threats means employing artificial intelligence and machine learning: these solutions identify indicators of compromise and behavioural patterns within a company's infrastructure, alerting security teams to any malicious actions (a minimal sketch of the underlying idea follows this list).
– Ensuring and maintaining a thorough company-wide AI policy: One way AI can threaten businesses going forward is simply through employees engaging with it. At a moment when AI products are emerging quickly, companies need to be confident about which ones their employees are using, how secure those products are, and what risk they take on by engaging with them. To this end, businesses should implement and maintain an up-to-date AI policy covering which products may be used, what information may be shared with them, and so on.
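To make the detection point above concrete, the snippet below is a minimal, illustrative sketch of the kind of behavioural anomaly detection such solutions perform: learn a baseline of normal activity, then flag deviations. It uses scikit-learn's IsolationForest; the features (hour of day, failed logins, data volume), thresholds, and data are all synthetic assumptions made for this example, not a description of any vendor's product.

```python
# A minimal sketch of behavioural anomaly detection over login telemetry.
# All data and feature choices here are synthetic, illustrative assumptions.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(seed=0)

# Synthetic baseline of "normal" login events:
# [hour of day, failed password attempts, MB transferred]
normal_events = np.column_stack([
    rng.normal(10, 2, 500),   # logins cluster around mid-morning
    rng.poisson(1, 500),      # the odd mistyped password
    rng.normal(50, 15, 500),  # routine data-transfer volumes
])

# Fit an isolation forest on the baseline of normal behaviour
detector = IsolationForest(contamination=0.01, random_state=0)
detector.fit(normal_events)

# Score new events; a prediction of -1 marks an outlier worth escalating
new_events = np.array([
    [3.0, 25, 900],   # 3 a.m. login, many failures, exfiltration-sized transfer
    [11.0, 0, 45],    # unremarkable daytime activity
])
for event, label in zip(new_events, detector.predict(new_events)):
    status = "ALERT" if label == -1 else "ok"
    print(f"{status}: hour={event[0]:.0f}, failures={event[1]:.0f}, MB={event[2]:.0f}")
```

A production detection solution would train on far richer telemetry and feed alerts into a security team's workflow, but the principle is the same: establish what normal looks like, then surface behaviour that does not fit.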
We are right to feel both excitement about artificial intelligence and apprehension about the sheer scale of its capabilities. By recognising the dangers inherent in generative AI falling into the hands of cybercriminals, we can begin to anticipate the areas in which they are most likely to use it for malicious ends. Although the technology may not yet be flawless, the reality is that generative AI is already becoming a tool for cybercriminals.
Extensive discussions about AI have already unfolded across various clandestine online platforms, and as time progresses these technological capabilities are poised to be further refined and enhanced. Through diligent, proactive cybersecurity measures, businesses can mitigate these risks and help ensure that their organisations and employees are kept safe from harm.