Microsoft and OpenAI Sound the Alarm

Generative AI, a rapidly advancing technology, is increasingly becoming a tool of choice for offensive cyber operations by U.S. rivals. Microsoft and OpenAI have warned about this disturbing trend, highlighting its potential to produce sophisticated, hard-to-detect cyber attacks that could pose significant threats to national security. Because traditional defenses may struggle to counter such AI-driven threats, the companies stress the urgent need for stronger cybersecurity preparedness.

Generative AI in Offensive Cyber Operations

Microsoft and OpenAI have detected and disrupted the malicious use of AI technologies for offensive cyber operations by U.S. adversaries, including Iran, North Korea, Russia, and China. These adversaries have used generative AI for purposes such as social engineering, phishing, and researching technologies related to warfare.

Generative AI is expected to enhance malicious social engineering, leading to more sophisticated deepfakes and voice cloning. Critics have raised concerns about the hasty public release of large language models and the need for greater focus on making them more secure.

The Role of Large Language Models

The use of large language models, such as the one behind OpenAI’s ChatGPT, is expected to fuel more sophisticated deepfakes, voice cloning, and other malicious social engineering tactics. Cybersecurity firms have long used machine learning for defense, but offensive hackers are now utilizing it as well. Microsoft, which has invested billions of dollars in OpenAI, has reported on this trend.

Notably, the North Korean cyberespionage group known as Kimsuky, Iran’s Revolutionary Guard, the Russian GRU military intelligence unit known as Fancy Bear, and Chinese cyberespionage groups have all used generative AI in various ways to conduct offensive cyber operations. Critics argue that Microsoft’s creation and selling of tools to address vulnerabilities in large language models may be contributing to the problem, and that more secure foundation models should be created instead.

Microsoft and OpenAI’s Response

Microsoft and OpenAI have collaborated to publish research on…
