AI is making its mark on the cybersecurity world.
For defenders, AI can help security teams detect and mitigate threats more quickly. For attackers, weaponized AI can assist with a number of attacks, such as deepfakes, data poisoning and reverse-engineering.
But, lately, it’s AI-powered malware that has come into the spotlight — and had its existence questioned.
AI-enabled attacks vs. AI-powered malware
AI-enabled attacks occur when a threat actor uses AI to assist in an attack. Deepfake technology, a type of AI used to create false but convincing images, audio and video, may be used, for example, during social engineering attacks. In these situations, AI is a tool used to carry out the attack, not the attack itself.
AI-powered malware, on the other hand, uses machine learning to become stealthier, faster and more effective than traditional malware. Unlike conventional malware, which targets a large number of people in the hope of successfully compromising a small percentage of them, AI-powered malware is trained to make decisions on its own, adapt its behavior to the scenario it encounters, and specifically target its victims and their systems.
IBM researchers presented the proof-of-concept AI-powered malware DeepLocker at Black Hat USA 2018 to demonstrate this new breed of threat. The researchers hid WannaCry ransomware inside a video conferencing application, where it remained dormant until AI facial recognition software identified a specific target's face.
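The core idea behind this kind of trigger can be illustrated with a simplified sketch. This is not DeepLocker's actual code, and the names and the toy cipher are purely illustrative: the point is that the payload's decryption key is derived from the trigger condition itself (here, a stand-in string playing the role of a face-recognition match), so a defender inspecting the binary cannot recover the payload without also observing the intended target.

```python
import hashlib

def derive_key(trigger_attribute: bytes) -> bytes:
    # The key is a hash of the condition the model must observe
    # (e.g., a matched face embedding), so it never appears in the code.
    return hashlib.sha256(trigger_attribute).digest()

def xor_stream(data: bytes, key: bytes) -> bytes:
    # Toy symmetric cipher, for illustration only.
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

# "Author" side: lock a benign placeholder payload to one expected trigger.
expected = b"embedding-of-target-face"  # illustrative stand-in for a model output
locked = xor_stream(b"payload", derive_key(expected))

# "Runtime" side: the payload unlocks only when the observed trigger matches.
assert xor_stream(locked, derive_key(b"someone-else")) != b"payload"
assert xor_stream(locked, derive_key(expected)) == b"payload"
```

In DeepLocker's public description, the trigger was the output of a deep neural network rather than a simple string comparison, which is what made static analysis of the dormant payload so difficult.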
Does AI-powered malware exist in the wild?
The quick answer is no. AI-powered malware has yet to be seen in the wild — but don’t rule out the possibility.
“Nobody has been hit with or successfully uncovered a truly AI-powered piece of offense,” said Justin Fier, vice president of tactical risk and response at Darktrace. “It doesn’t mean it’s not out there; we just haven’t seen it yet.”
Pieter Arntz, malware analyst at Malwarebytes, agreed that AI-powered malware has yet to be seen. "To my knowledge, so far, AI is only used at scale in malware circles to improve the effectiveness of existing malware campaigns," he said in an email to SearchSecurity. He predicted that cybercriminals will continue to use AI to enhance operations, such as targeted spam, deepfakes and social…