Should we be worried about self-learning malware?


Could we be just a few years away from self-learning malware becoming a credible threat to businesses? According to CCS Insight, the answer is yes. In its predictions for 2021 and beyond, the analyst firm forecast that self-learning malware will cause a major security breach by the end of 2024.

Self-learning, adaptive malware isn’t new, but to date it has been largely confined to lab environments and hackathons. Some of the earliest examples of self-propagating malware were able to ‘learn’ about their environment.

For example, the Morris Worm of 1988 learnt which other computers to compromise from the systems it infected, notes Martin Lee, a member of the Institution of Engineering and Technology’s (IET) Cybersecurity and Safety Committee and a Cisco employee.

“It was also aware if it was re-infecting a system that had already been infected, and would refuse to run, most of the time, if it learnt another copy of itself was already present.”

“In more recent years we’ve seen malware such as Olympic Destroyer discover the usernames and passwords on a system and append these to its own source code in order to increase the efficiency of subsequent attempts to compromise systems,” he continues. “By adding its own source code as it jumps between systems, it can be thought of as memorising credentials to help in its own success.”

The difference between automation and evolution

Anna Chung, a principal researcher at Unit 42, Palo Alto Networks’ global threat intelligence team, notes, however, that it’s important to distinguish between automated hacking tools and AI or self-learning malware. “There are many automated hacking tools in the world. Their function is to execute specific and repetitive tasks based on pre-set rules, but they cannot evolve by themselves.”

“Most threats are controlled and guided by actors based on what information is gleaned and relayed to them. There is little evidence that malware is ‘self-learning’,” adds her colleague Alex Hinchliffe, a threat intelligence analyst.

He says the closest thing Unit 42 has seen to this concept was Stuxnet; not from an AI point of view, but from an autonomous software…