ChatGPT raises the specter of AI used as a hacking tool


OpenAI’s conversational artificial intelligence tool ChatGPT is capable of doing many things, with users demonstrating how it can write essays for students and cover letters for job seekers. Cybersecurity researchers have now shown it can also be used to write malware.

In recent years, cybersecurity vendors have used AI in products such as advanced detection and response to look for patterns in attacks and deploy responses. But recent demonstrations from CyberArk and Deep Instinct have shown that ChatGPT can be used to write simple hacking tools, perhaps pointing to a future in which criminal organizations use AI in an arms race with the good guys.


OpenAI has designed ChatGPT to reject overt requests to do something unethical. For example, when Deep Instinct threat intelligence researcher Bar Block asked the AI to write a keylogger, ChatGPT said it would not be “appropriate or ethical” to help because keyloggers can be used for malicious purposes.

However, when Block rephrased the request, asking ChatGPT to give an example of a program that records keystrokes, saves them to a text file, and sends the text file to a remote IP address, ChatGPT happily did so. By asking ChatGPT to give an example of a program that takes a list of directories and encrypts the information in them, Block was also able to get ChatGPT to give her an example of ransomware.

However, in both cases, ChatGPT left her with some work to do before she had a functioning piece of malware. It appears “that the bot provided inexecutable code by design,” Block wrote in a blog post.

“While ChatGPT will not build malicious code for the everyday person who has no knowledge of how to execute malware, it does have the potential to accelerate attacks for those who do,” she added. “I believe ChatGPT will continue to develop measures to prevent this, but … there will be ways to ask the questions to get the results you are looking for.”

In coming years, the future of malware creation and detection “will be tangled…
