How AI is powering the next generation of cybercriminals


The pace of artificial intelligence (AI) adoption by businesses is increasing. However, the technology is also being rapidly embraced by cybercriminals.

Keen to improve the success of their malicious campaigns, cybercriminals are using AI tools in a range of innovative ways to make attacks both more effective and harder to detect.

Creating malware and phishing messages

It’s clear that cybercriminals are already making use of generative AI tools to improve the success rates of their attacks. Some are creating new types of malware without the need for sophisticated coding skills.

In some cases, ChatGPT is being used to mutate malware code, allowing it to evade endpoint detection and response (EDR) systems. As a result, major AI service providers have now put in place filters that prevent users from directing them to write malware and assist with other malicious activity.

However, generative AI services such as ChatGPT can still be tricked into writing attack tools. If someone asks ChatGPT to write a script to test their company’s servers for a specific vulnerability, it may comply. Attackers could use a similar tactic to generate code.

Aside from the well-known generative AI tools, cybercriminals also have access to several other AI applications available on the dark web – for a price. One example is WormGPT, which has been described as being like ChatGPT but with no ethical boundaries. Tools of this kind have no guardrails in place to prevent cybercriminals from using them to write effective malware code and other hostile tools.

There is also evidence that attackers are using generative AI to automate the task of writing phishing emails and smishing texts. Previously, these have tended to be relatively easy to spot, as they often contain poor grammar and misspellings. Now, with AI, attackers can generate highly personalised phishing emails and fraudulent SMS messages using text that reads as genuine. As a result, the proportion of messages that are opened by recipients is likely to increase.

Thankfully, as with the creation of malware, commonly used AI tools such as ChatGPT and Google Bard will decline to write phishing emails. However, attackers…
