Meta warns of ChatGPT malware on Facebook


AI Tools: The New Weapon for Malware Attacks

Artificial Intelligence (AI) has become a buzzword in the tech industry, and it seems everyone is obsessed with it, including hackers. In a recent security report, Facebook’s parent company, Meta, says its security team has been tracking new malware threats that weaponize the current AI trend.

Meta claims it has discovered “around ten new malware families” that pose as AI chatbot tools like OpenAI’s popular ChatGPT in order to hack into users’ accounts. One of the more pressing schemes, according to Meta, is the proliferation of malicious browser extensions that appear to offer ChatGPT functionality. Users install these extensions for Chrome or Firefox, for example, expecting an AI chatbot, and some of them even work and provide the advertised features. However, the extensions also contain malware that can access a user’s device.

According to Meta, it has discovered more than 1,000 unique URLs offering malware disguised as ChatGPT or other AI-related tools and has blocked them from being shared on Facebook, Instagram, and WhatsApp. Once a user downloads the malware, the attackers can launch their attack immediately, and they constantly update their methods to get around security protections. In one example, bad actors were able to quickly automate the takeover of business accounts and grant themselves advertising permissions.

Meta says it has reported the malicious links to the domain registrars and hosting providers used by these bad actors. However, this is just the tip of the iceberg: hackers are constantly evolving their tactics and using AI tools to make their attacks more sophisticated and harder to detect.

The use of AI in malware attacks is not new; it has been around for some time. Hackers have been using machine learning algorithms to create malware that evades traditional security measures, and they can use AI to automate their attacks, making them faster and more efficient.

One of the most significant risks associated with AI-powered malware is that it can learn and…
