3 Ways Hackers Use ChatGPT to Cause Security Headaches

With ChatGPT making headlines everywhere, it feels like the world has entered a Black Mirror episode. While some argue artificial intelligence will be the ultimate solution to our biggest cybersecurity issues, others say it will introduce a whole slew of new challenges.

I’m on the side of the latter. While I recognize that ChatGPT is an amazing piece of technology, it is also an enabler for hackers, commoditizing nation-state capabilities for the benefit of “script kiddies,” aka unsophisticated hackers. Beyond writing text, the technology opens up a scary scenario in which a computer can be guided to hunt for information in images that humans can’t immediately pick up on but machines are sensitive enough to detect: passwords reflected in glass, for example, or people in the background of photos who would go unnoticed without the help of AI.

As ChatGPT adoption grows, I believe the industry needs to proceed with caution, and here’s why. Hackers can put ChatGPT to work in three ways: mass phishing, reverse engineering, and smart malware. Let’s take a look at each in detail.

Mass Phishing

Because ChatGPT is so powerful, it can cut the time required to craft personalized emails for a list of targets from a few days to just minutes. With the click of a button, ChatGPT can answer very specific questions and use its knowledge to impersonate both security personnel and non-security experts. And because it can translate text into any style of writing and proofread at a very high level, once a list of employees and their details is obtained, it’s easy to mass-produce emails in which a hacker pretends to be someone else, increasing the chances of a successful attack.

Phishing is an essential part of hacking organizations, whether the goal is to gain access to an organization’s servers or to convince people to transfer money. To combat this, business leaders must educate employees on the security implications of ChatGPT and how to spot potential attacks. Employees should be especially critical of text and never assume it comes from an authentic source. Instead of just blindly…
