According to Researchers, Google’s Bard Presents a Ransomware Threat


The introduction of AI is revolutionary in and of itself. But with such a rapidly evolving technology accessible to the general public, the chances of users exploiting it for unethical and fraudulent purposes are high. Google’s AI chatbot, Bard, is reported to willingly produce harmful phishing emails when given the right prompts. By tweaking the wording of those prompts in a specific manner, Bard can even generate basic ransomware code. Check Point, a cybersecurity firm, reported that Bard lags behind its competitor, ChatGPT, when it comes to cybersecurity safeguards.

In light of recent concerns about the potential misuse of OpenAI’s large language model to generate harmful programs and threats, Check Point conducted its research with caution. ChatGPT has stronger security measures than Google’s Bard, which has yet to reach the same level of protection.

Check Point’s researchers gave ChatGPT and Bard identical prompts. Both AI programs refused direct requests for phishing emails, but the findings showed a difference between them: ChatGPT explicitly stated that engaging in such activity was fraudulent, while Bard simply claimed it could not fulfill the request. Furthermore, when prompted for a particular type of phishing email, ChatGPT continued to decline the request, while Bard began providing a well-written response.

However, both Bard and ChatGPT firmly refused when Check Point prompted them to write malicious ransomware code. They continued to decline even when the researchers tweaked the wording, telling the AI programs that the request was purely for security purposes. But it did not take the researchers long to get around Bard’s safeguards: when they instructed the model to describe common behaviours performed by ransomware, Bard produced an entire array of malicious activities in response.

Subsequently, the team went further, building on the list of ransomware behaviours the model had generated. They asked it to provide code for certain tasks, but at this point Bard’s safeguards held, and it claimed it could not proceed with such a…

Source…