How Artificial Intelligence Is Changing Cyber Threats


Person looking at a visualization of an interconnected big data structure.
Image: NicoElNino/Adobe Stock

HackerOne, a security platform and hacker community forum, hosted a roundtable on Thursday, July 27, about the way generative artificial intelligence will change the practice of cybersecurity. Hackers and industry experts discussed the role of generative AI in various aspects of cybersecurity, including novel attack surfaces and what organizations should keep in mind when it comes to large language models.

Jump to:

Generative AI can introduce risks if organizations adopt it too quickly

Organizations using generative AI like ChatGPT to write code should be careful they don’t end up creating vulnerabilities in their haste, said Joseph “rez0” Thacker, a professional hacker and senior offensive security engineer at software-as-a-service security company AppOmni.

For example, ChatGPT doesn’t have the context to understand how vulnerabilities might arise in the code it produces. Organizations have to hope that ChatGPT will know how to produce SQL queries that aren’t vulnerable to SQL injection, Thacker said. Attackers being able to access user accounts or data stored across different parts of the organization often cause vulnerabilities that penetration testers frequently look for, and ChatGPT might not be able to take them into account in its code.

The two main risks for companies that may rush to use generative AI products are:

  • Allowing the LLM to be exposed in any way to external users that have access to internal data.
  • Connecting different tools and plugins with an AI feature that may access untrusted data, even if it’s internal.

How threat actors take advantage of generative AI

“We have to remember that systems like GPT models don’t create new things — what they do is reorient stuff that already exists … stuff it’s already been trained on,” said Klondike. “I think what we’re going to see is people who aren’t very technically skilled will be able to have access to their own GPT models that can teach them about the code or help them build ransomware that already exists.”

Prompt injection

Anything that browses the internet — as an LLM can do — could create this kind of problem.

One possible avenue of cyberattack on…

Source…