Dear enterprise IT: Cybercriminals use AI too



In a 2017 Deloitte survey, only 42% of respondents considered their institutions to be extremely or very effective at managing cybersecurity risk. The pandemic has certainly done nothing to alleviate these concerns. Despite increased IT security investments companies made in 2020 to deal with distributed IT and work-from-home challenges, nearly 80% of senior IT workers and IT security leaders believe their organizations lack sufficient defenses against cyberattacks, according to IDG.

Unfortunately, the cybersecurity landscape is poised to become more treacherous with the emergence of AI-powered cyberattacks, which could enable cybercriminals to slip past conventional, rules-based detection tools. With AI in the mix, for example, “fake email” could become nearly indistinguishable from messages sent by trusted contacts. And deepfakes — AI-generated media that replace a person in an existing image, audio recording, or video with someone else’s likeness — could be employed to commit fraud, costing companies millions of dollars.

The solution could lie in “defensive AI,” or self-learning algorithms that understand normal user, device, and system patterns in an organization and detect unusual activity without relying on historical data. But the road to widespread adoption could be long and winding as cybercriminals look to stay one step ahead of their targets.
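The self-learning idea behind defensive AI can be sketched in miniature: a detector that builds a statistical baseline of what normal activity looks like and flags large deviations, with no labeled attack data required. This is an illustrative toy, not any vendor's product; the class name, the logins-per-hour metric, and the z-score threshold are all assumptions chosen for the example.

```python
import statistics

class BaselineAnomalyDetector:
    """Toy sketch of a self-learning detector: learn a baseline of
    normal activity for one metric, then flag values that deviate
    sharply from it. Illustrative only."""

    def __init__(self, z_threshold=3.0):
        self.z_threshold = z_threshold  # how many std devs counts as unusual
        self.observations = []

    def observe(self, value):
        # Fold a new "normal" observation into the learned baseline.
        self.observations.append(value)

    def is_anomalous(self, value):
        # Flag values far outside the learned baseline (z-score test).
        if len(self.observations) < 2:
            return False  # not enough history to judge yet
        mean = statistics.fmean(self.observations)
        stdev = statistics.stdev(self.observations)
        if stdev == 0:
            return value != mean
        return abs(value - mean) / stdev > self.z_threshold

# Usage: learn typical logins per hour, then test new readings.
detector = BaselineAnomalyDetector()
for logins in [12, 15, 11, 14, 13, 16, 12, 15]:
    detector.observe(logins)

print(detector.is_anomalous(14))   # → False (within normal range)
print(detector.is_anomalous(400))  # → True (sudden spike)
```

Real defensive AI systems model many signals at once (users, devices, network flows) with far richer models, but the core contrast with rules-based tools is the same: the baseline is learned from the organization's own traffic rather than written as static signatures.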

What are AI-powered cyberattacks?

AI-powered cyberattacks are conventional cyberattacks augmented with AI and machine learning technologies. Take phishing, for example — a type of social engineering in which an attacker sends a message designed to trick a person into revealing sensitive information or installing malware. Infused with AI, phishing messages can be automatically personalized to target high-profile employees at enterprises (such as members of the C-suite), a practice known as “spear phishing.”

Imagine an adversarial group attempting to impersonate board members or send fake invoices claiming to come from familiar suppliers. Sourcing a machine learning language model capable of generating…