GPT-4 kicks AI security risks into higher gear
As Arthur C. Clarke once put it, any sufficiently advanced technology is “indistinguishable from magic.”
Some might say this is true of ChatGPT, too — including, if you will, black magic.
Immediately upon its launch in November, security teams, pen testers and developers began discovering exploits in the AI chatbot, and those continue to evolve with its newest iteration, GPT-4, released earlier this month.
“GPT-4 won’t invent a new cyberthreat,” said Hector Ferran, VP of marketing at BlueWillow AI. “But just as it is being used by millions already to augment and simplify a myriad of mundane daily tasks, so too could it be used by a minority of bad actors to augment their criminal behavior.”
Evolving technologies, threats
In January, just two months after launch, ChatGPT reached 100 million users — setting a record for the fastest user growth of an app. And as it has become a household name, it is also a shiny new tool for cybercriminals, enabling them to quickly create tools and deploy attacks.
Most notably, the tool is being used to generate programs that can be used in malware, ransomware and phishing attacks.
BlackFog, for instance, recently asked the tool to create a PowerShell attack in a “non-malicious” way. The script was generated quickly and was ready to use, according to researchers.
CyberArk, meanwhile, was able to bypass ChatGPT’s content filters to create polymorphic malware, code that repeatedly mutates to evade signature-based detection. Its researchers also used ChatGPT to mutate existing code into variants that proved highly evasive and difficult to detect.
And, Check Point Research was able to use ChatGPT to create a convincing spear-phishing attack. The company’s…