Cybersecurity researchers are sounding an alarm about the hacking community’s answer to ChatGPT, a new generative AI tool dubbed WormGPT, which is being used to create sophisticated attacks on Australian businesses.
WormGPT is being described as similar to ChatGPT, but with no ethical boundaries or limitations, and researchers say hundreds of customers have already paid for access to the tool on the dark web.
A 23-year-old Portuguese programmer, “Last”, describes himself as the creator of WormGPT, and pitches it as a piece of technology that “lets you do all sorts of illegal stuff and easily sell it online in the future”.
“Everything blackhat related that you can think of can be done with WormGPT, allowing anyone access to malicious activity without ever leaving the comfort of their home,” Last said in an online post on the dark web, in which he sold access to the tool.
While businesses are still excited about the productivity benefits generative AI can bring, industry figures are warning that the new technology is set to unleash a wave of innovative cyberattacks against businesses and individuals.
Patrick Butler, managing partner at Australian cyber firm Tesserent, said that malicious parties were signing up to criminal forums to rent access to WormGPT and using it to craft convincing phishing emails in different languages, which then allowed them to commit identity theft and compromise system access.
While phishing emails were often characterised by poor spelling or grammar, generative AI could create emails with impeccable English, Butler said, and tools such as WormGPT could be used by attackers with limited technical skills.
“We’re seeing malicious generative AI being used to create new malware variants that are more difficult for some traditional tools to detect,” Butler said. “These platforms can even assist criminals in exploiting published vulnerabilities.
“While some legitimate AI tools can be used to conduct software code reviews, developers should be discouraged from doing this as their code may be used to train AI models that criminals gain access to, giving them further intelligence into organisational systems.”
Butler said the number of different threat actors would likely escalate as generative AI made it easier for criminals to access cyberattack tools. He said the Tesserent Security Operations Centre had already found an increase in phishing campaigns and malicious email activities targeting Australian organisations, particularly in the months following the emergence of WormGPT and similar tools.
There are now at least six generative AI tools available to rent or purchase on the dark web, including FraudGPT, EvilGPT, DarkBard, WolfGPT, XXXGPT and WormGPT, with more appearing, according to Butler.
“While most lack the large capacity of public-facing tools like ChatGPT and Bard, they are proliferating quickly, which can make them harder to find and take down.”
Scott Jarkoff, director of intelligence strategy, APJ & META, at CrowdStrike, said cybersecurity activity had risen amid the conflict in the Middle East, meaning businesses should be even more vigilant than usual.
He said hacking groups from the so-called “big four” of Russia, China, North Korea and Iran had been using generative AI tools to craft attacks in perfect English.
“The Israel-Hamas conflict is now giving criminals a perfect lure to say ‘hey, visit this site to donate to whichever cause you believe in’, and that means it’s now more important that everyone takes cybersecurity more seriously,” he said.
“We all take safety seriously, why do we not take cyber seriously? We’ve got to get to a point where cyber hygiene is built into everyone’s muscle memory, just as safety is built into everyone’s muscle memory.”
Generative AI is not only being used to create realistic phishing emails. It’s also supercharging social engineering, with bad actors using AI to create realistic fake accounts to spread misinformation, according to Dan Schiappa, chief product officer at cyber vendor Arctic Wolf.
Chinese authorities recently arrested a man for using ChatGPT to create a fake news story about a train derailment, and he will be far from the last person to use the technology to create chaos, Schiappa said.
“The long-standing ‘arms race’ between cyberattackers and cybersecurity practitioners has left both sides with new opportunities to act faster than ever before using AI,” he said.
The positive for the cybersecurity industry and for Australian businesses is that generative AI tools can be used by security personnel, or “good guys”, to identify new vulnerabilities and defend themselves more quickly.
“ ‘Good guys’ are leveraging the tech to find anomalies or patterns in system access records, sniffing out intrusion attempts that otherwise might have gone undetected without AI,” Schiappa said.
“As defenders, we need the power to harness that ability to defend organisations without allowing massive corporations to run wild with no restrictions on their research and development.
“Recent research has even noted that organisations using AI to help defend themselves resolved breaches nearly two-and-a-half months quicker than organisations not using AI or automation, and saved $3 million more in breach costs than those not using the technology.”
Author: Gregory Hensley