Beware of AI-enhanced Cyberattacks – CEPA


Artificial intelligence can increase the quality and speed of cyberattacks. But AI also can improve our defenses.

When hackers first sent phishing emails in the 1990s, their technique was laborious: each batch of fake messages had to be composed and dispatched by hand, luring users to a webpage that harvested their login credentials. Today, AI-enhanced phishing increases the speed and scale of cyberattacks, searching out targets and automatically dispatching millions of customized emails within minutes, then, dangerously, hunting for new victims in the US and abroad.

AI personalizes. The software analyzes social media profiles, data breaches, and public records to generate convincing messages that appear to come from trusted colleagues, friends, or reputable organizations.

While this AI-powered security threat is immense, AI also offers an opportunity to strengthen cyber defenses. A strong legal framework is required to respond. Surprisingly, the US is ahead of Europe in regulations and policies governing cyber operations related to national security.

AI-enhanced cyberattacks represent an evolution in the long history of cyberattack automation. AI disseminates malicious software across networks and devices, expediting the theft of sensitive data from compromised systems. Automated credential stuffing tests millions of stolen username and password combinations against multiple online login pages, enabling account takeover at speed and scale.
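The flip side of that automation is that it leaves a distinctive footprint defenders can look for. A minimal sketch of one common detection idea, with hypothetical thresholds and function names chosen for illustration: credential stuffing shows up as a single source failing logins against many *different* accounts in a short window, unlike an ordinary user retrying one password.

```python
from collections import defaultdict, deque
import time

WINDOW_SECONDS = 60          # illustrative detection window
DISTINCT_USER_THRESHOLD = 5  # illustrative threshold

# source IP -> recent failed attempts as (timestamp, username)
failed = defaultdict(deque)

def record_failed_login(ip, username, now=None):
    """Record a failed login; return True if this source looks like
    credential stuffing (many distinct usernames in a short window)."""
    now = time.time() if now is None else now
    attempts = failed[ip]
    attempts.append((now, username))
    # drop attempts that fell outside the sliding window
    while attempts and now - attempts[0][0] > WINDOW_SECONDS:
        attempts.popleft()
    distinct_users = {user for _, user in attempts}
    return len(distinct_users) >= DISTINCT_USER_THRESHOLD

# One IP cycling through many accounts trips the detector quickly.
for i in range(6):
    flagged = record_failed_login("203.0.113.7", f"user{i}", now=1000 + i)
print(flagged)  # True: six distinct usernames within the window
```

Real deployments layer this kind of velocity check with IP reputation, device fingerprinting, and rate limiting, but the underlying signal is the same access pattern shown here.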

The same power that allows machines to act or learn on their own makes them difficult to control. Consider the "paperclip maximizer," a thought experiment introduced by philosopher Nick Bostrom. A hypothetical AI-powered computer is given the sole objective of manufacturing as many paper clips as possible. It pursues this narrow goal relentlessly, allocating all available resources to it, including those necessary for human survival, with catastrophic consequences.

The thought experiment underscores the danger of AI cyber automation: a seemingly harmless objective can lead to an unintended outcome. COMPAS, software used by US courts to…
