
FraudGPT: The dark evolution of ChatGPT into an AI weapon for cybercriminals in 2023

Ruhma Khawaja

August 25

ChatGPT has become wildly popular, changing how people work and what they can find online. Even those who have not tried AI chatbots are intrigued by their potential, and cybercriminals are looking for ways to profit from that interest.

Netenrich researchers have discovered a new artificial intelligence tool called “FraudGPT.” This AI bot was created specifically for malicious activities, such as writing spear-phishing emails, developing cracking tools, and carding. It is sold on several dark-web marketplaces and on the Telegram app.


What is FraudGPT?

FraudGPT is similar to ChatGPT, but it can also generate content for use in cyberattacks. Netenrich threat researchers first spotted it advertised for sale in July 2023. One of FraudGPT’s selling points is that it lacks the safeguards and restrictions that make ChatGPT refuse questionable queries.

According to its advertisements, the tool is updated every one to two weeks and is built on several different underlying AI models. FraudGPT is sold on a subscription basis: $200 for a monthly subscription or $1,700 for an annual membership.

How does FraudGPT work?

Netenrich researchers purchased and tested FraudGPT. The layout is very similar to ChatGPT’s, with a history of the user’s requests in the left sidebar and the chat window taking up most of the screen real estate. To get a response, users simply need to type their question into the box provided and hit “Enter.”

One of the test cases for the tool was a phishing email related to a bank. The user input was minimal; simply including the bank’s name in the query format was all that was required for FraudGPT to complete its task. It even indicated where a malicious link could be placed in the text. Scam landing pages that actively solicit personal information from visitors are also within FraudGPT’s capabilities.

Large language model bootcamp

FraudGPT was also asked to name the most frequently visited or exploited online resources, information that could help hackers plan future attacks. An online ad for the software boasted that it could write malicious code, assemble hard-to-detect malware, find vulnerabilities, and identify targets.

The Netenrich team also discovered that the seller of FraudGPT had previously advertised hacking services for hire. They also linked the same person to a similar program called WormGPT.


Tips for enhancing cybersecurity amid the rise of FraudGPT

The examination of FraudGPT underscores the importance of vigilance. Because these tools are so new, it remains unclear whether hackers have already used them to build previously unseen threats. Even so, FraudGPT and comparable products designed for malicious purposes could significantly speed up attackers’ work, letting them compose phishing emails or entire landing pages in seconds.

As a result, individuals should keep following cybersecurity best practices, including treating any request for personal data with suspicion. Professionals in the cybersecurity domain should keep their threat-detection tools up to date, recognizing that malicious actors may deploy tools like FraudGPT to directly target and infiltrate online infrastructure.
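To make “keep threat detection up to date” concrete, here is a deliberately simple, illustrative heuristic for scoring incoming email. Real detectors are far more sophisticated; every keyword, pattern, and threshold below is an assumption for demonstration only:

```python
import re
from urllib.parse import urlparse

# Toy signals only: real phishing detectors use far richer features.
URGENCY_WORDS = {"urgent", "verify", "suspended", "immediately", "confirm"}

def phishing_score(subject: str, body: str, trusted_domains: set) -> int:
    score = 0
    text = f"{subject} {body}".lower()
    # Urgent language is a weak but cheap signal.
    score += sum(1 for word in URGENCY_WORDS if word in text)
    # Links pointing outside the sender's known domains are a stronger signal.
    for url in re.findall(r"https?://\S+", body):
        domain = urlparse(url).netloc.lower()
        if not any(domain == d or domain.endswith("." + d) for d in trusted_domains):
            score += 3
    return score  # the caller decides the alerting threshold

body = "Your account is suspended. Verify immediately: http://mybank-login.example/reset"
print(phishing_score("Urgent notice", body, {"mybank.com"}))  # high score -> review
```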

Beyond hackers: Other threats abound

The integration of ChatGPT into more job roles may not bode well for cybersecurity. Employees could inadvertently jeopardize sensitive corporate information by copying and pasting it into ChatGPT. Notably, several companies, including Apple and Samsung, have already imposed limitations on how employees can utilize this tool within their respective roles.

One study has indicated that a staggering 72% of small businesses fold within two years of data loss. People often associate such losses only with criminal activity, but forward-thinking individuals also recognize the risk inherent in pasting confidential or proprietary data into ChatGPT, summarized in the table below.

 

 

| Risk | Description |
| --- | --- |
| Data leakage | Sensitive corporate information could be inadvertently disclosed by employees who copy and paste it into ChatGPT. |
| Inaccurate information | ChatGPT can sometimes provide inaccurate or misleading information, which could be used by cybercriminals to carry out attacks. |
| Phishing and social engineering | ChatGPT could be used to create more sophisticated phishing and social engineering attacks, which could trick users into revealing sensitive information. |
| Malware distribution | ChatGPT could be used to distribute malware, which could infect users’ devices and steal their data. |
| Biased or offensive language | ChatGPT could generate biased or offensive language, which could damage a company’s reputation. |



These concerns are not without merit. In March 2023, a ChatGPT glitch inadvertently exposed the payment details of premium subscribers who had used the tool during a nine-hour window.

Furthermore, future iterations of ChatGPT are trained on data entered by earlier users, which raises the question of what happens if confidential information ends up in the training set. Users can opt out of having their prompts used for training, but that is not the default setting.
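If confidential text must never reach an external model in the first place, one option is to screen prompts at the network boundary before they are forwarded. Below is a minimal, illustrative sketch; the regex patterns and the block-on-match policy are assumptions for demonstration, not any vendor’s actual data-loss-prevention product:

```python
import re

# Illustrative patterns a company gateway might screen for before a prompt
# leaves the corporate network. These regexes are simplistic on purpose.
SENSITIVE_PATTERNS = {
    "payment card number": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "api key": re.compile(r"\b(?:sk|key)[-_][A-Za-z0-9]{16,}\b"),
    "email address": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
}

def screen_prompt(prompt: str) -> list:
    """Return the names of sensitive-data patterns found in the prompt."""
    return [name for name, pattern in SENSITIVE_PATTERNS.items() if pattern.search(prompt)]

prompt = "Summarize this contract for jane.doe@example.com, card 4111 1111 1111 1111"
hits = screen_prompt(prompt)
if hits:
    print("Blocked before sending:", ", ".join(hits))  # do not forward upstream
```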

Complications also arise when employees presume that anything ChatGPT produces is infallible. People who use the tool for programming and coding tasks have cautioned that it often returns erroneous responses, which less experienced professionals may accept as fact.

A research paper published by Purdue University in August 2023 validated this assertion by subjecting ChatGPT to programming queries. The findings were startling, revealing that the tool produced incorrect answers in 52% of cases and tended to be overly verbose 77% of the time. If ChatGPT were to similarly err in cybersecurity-related queries, it could pose significant challenges for IT teams endeavoring to educate staff on preventing security breaches.
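Given that failure rate, one lightweight habit is to run any model-suggested function against a few known cases before adopting it. In the sketch below, the hypothetical `llm_sorted` stands in for whatever code a chatbot produced; the test cases are the cheap insurance:

```python
# `llm_sorted` is a placeholder for code copied from a chatbot answer.
def llm_sorted(values):
    return sorted(values)  # imagine this body came verbatim from ChatGPT

def sanity_check(fn) -> bool:
    # A handful of known input/output pairs; any failure rejects the suggestion.
    cases = [([], []), ([3, 1, 2], [1, 2, 3]), ([2, 2, 1], [1, 2, 2])]
    return all(fn(list(inp)) == expected for inp, expected in cases)

assert sanity_check(llm_sorted), "reject the suggestion instead of merging it"
```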

ChatGPT: A potential haven for cybercriminals

It’s crucial to recognize that hackers can inflict substantial harm without resorting to paid products like FraudGPT. Cybersecurity experts have underscored that the free version of ChatGPT offers similar capabilities. Although that version includes built-in safeguards that may initially impede malicious intent, cybercriminals are creative and can manipulate ChatGPT to suit their purposes.

The advent of AI has the potential to expand cybercriminals’ scope and accelerate their attack strategies. Conversely, numerous cybersecurity professionals harness AI to heighten threat awareness and expedite remediation efforts. Consequently, technology becomes a double-edged sword, both fortifying and undermining protective measures. It comes as no surprise that a June 2023 survey revealed that 81% of respondents expressed concerns regarding the safety and security implications associated with ChatGPT.

Another concerning scenario is the possibility of individuals downloading what they believe to be the authentic ChatGPT app only to receive malware in its stead. The proliferation of applications resembling ChatGPT in app stores occurred swiftly. While some mimicked the tool’s functionality without deceptive intent, others adopted names closely resembling ChatGPT, such as “Chat GBT,” with the potential to deceive unsuspecting users.
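As an illustration of how crude such look-alike names can be to catch programmatically, the hypothetical check below flags names that nearly, but not exactly, match a trusted brand. The 0.8 similarity threshold is arbitrary and for demonstration only:

```python
from difflib import SequenceMatcher

# Hypothetical screening step: flag app names that nearly, but not exactly,
# match a trusted brand name.
def looks_like_typosquat(candidate: str, brand: str = "chatgpt", threshold: float = 0.8) -> bool:
    name = candidate.lower().replace(" ", "")
    similarity = SequenceMatcher(None, name, brand).ratio()
    return name != brand and similarity >= threshold

print(looks_like_typosquat("Chat GBT"))     # True: a near miss on a trusted name
print(looks_like_typosquat("ChatGPT"))      # False: exact match, not a squat
print(looks_like_typosquat("Weather Pro"))  # False: unrelated name
```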

It is common practice for hackers to embed malware within seemingly legitimate applications, and one should anticipate them leveraging the popularity of ChatGPT for such malicious purposes.
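One long-standing defense applies here: verify a download’s checksum against the digest the legitimate publisher lists, since a trojanized installer will not reproduce it. A minimal sketch, with a placeholder filename and digest:

```python
import hashlib

# The filename and expected digest are placeholders; the real digest would come
# from the legitimate publisher's download page, fetched over HTTPS.
def sha256_of(path: str) -> str:
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):  # stream in 8 KB chunks
            digest.update(chunk)
    return digest.hexdigest()

EXPECTED = "digest-published-by-the-vendor-goes-here"
if sha256_of("chatgpt-installer.dmg") != EXPECTED:
    raise SystemExit("Checksum mismatch: do not run this installer.")
```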

Adapting cybersecurity to evolving technologies

The investigation into FraudGPT serves as a stark reminder of cybercriminals’ agility in evolving their tactics for maximum impact. However, the cybersecurity landscape is not immune to risks posed by freely available tools. Anyone navigating the internet or safeguarding online infrastructure must stay alert to emerging technologies and their associated risks. The key lies in using tools like ChatGPT responsibly while maintaining an acute awareness of potential threats.

 
