ChatGPT is being used to create malware — what you need to know
As ChatGPT, Bing Chat and Google Bard continue to take the world by storm, cybersecurity experts have voiced their concerns about potential threats posed by AI.
And those concerns appear increasingly valid, as malware has already been created using ChatGPT. As reported by Infosecurity, WithSecure CEO Juhani Hintikka confirmed to the outlet that malware samples generated by ChatGPT have been spotted in the wild.
Just as ChatGPT can provide different answers to the same question, it can also generate variations on a piece of code. Apparently, this is what the hackers abusing the AI chatbot did to create malware.
By feeding ChatGPT existing malware samples, hackers can have it create new malware strains that are polymorphic. As WithSecure’s head of threat analysis Tim West pointed out to Infosecurity, this will make it particularly challenging to defend against these new threats.
While we now know that ChatGPT has been used to create malware, we don't yet know much else, including how dangerous this malware is and whether it is currently being used in cyberattacks.
In order to bypass the defenses of Google, Microsoft and other tech giants, hackers often find clever ways to abuse legitimate tools. For instance, remote access tools are frequently used by hackers in their attacks, and it now appears that they, too, have jumped on the AI bandwagon.
As Hintikka points out, AI has traditionally been used by antivirus companies and other defenders to fend off cyberattacks. However, this appears to be changing as cybercriminals now have more resources at their disposal.
Besides answering your most pressing questions, ChatGPT can be used for coding. In fact, the chatbot can write code for you, which “lowers the barrier for entry for the threat actors to develop malware,” according to West. While hackers can currently buy pre-built and custom malware on the dark web, generative AI gives them the tools to cut out the middleman and create new malware on their own.
At the same time, hackers are already using AI to craft their phishing emails. So far, humans have been able to identify these AI-crafted phishing attempts but as AI becomes…