Word on the street is that FraudGPT has launched. It’s an AI bot built exclusively for offensive purposes, such as crafting spear-phishing emails, creating cracking tools, and forging fake identities, and it signals the first major instance of a published fraud-focused AI tool. FraudGPT follows in the footsteps of WormGPT and is found on various dark web marketplaces and Telegram channels, where it has been circulating since at least July 22, 2023, for a subscription cost of $200 a month (or $1,000 for six months and $1,700 for a year).
The new tool, which already has confirmed sales and multiple reviews, could empower everyday individuals to become cybercriminals, generating an entirely new wave of fraudsters that businesses and governmental agencies will need to thwart. Seven companies — Google, Microsoft, Meta, Amazon, OpenAI, Anthropic and Inflection — recently convened at the White House to announce voluntary agreements for the safe development of AI.
The author of FraudGPT claims the tool can write malicious code, create undetectable malware, and find leaks and vulnerabilities, and that it has already amassed more than 3,000 confirmed sales and reviews. The specific large language model (LLM) underlying the tool is currently unknown.
Cybersecurity expert Ari Jacoby, CEO of Deduce, Inc., believes that AI-powered fraud will invalidate legacy fraud-prevention tools and that a new generation of detection and prevention will be needed to match the sophistication of AI-generated attacks. Top of mind? First, using AI for good by empowering companies with data-powered countermeasures. Second, measuring and monitoring large data patterns to detect waves of fraud rather than focusing on individual vulnerabilities.