Tag Archive for: microsoft

Microsoft and OpenAI Sound the Alarm


Generative AI, a rapidly advancing technology, is increasingly becoming a tool of choice for offensive cyber operations by U.S. rivals. Microsoft and OpenAI have sounded the alarm about this disturbing trend, highlighting its potential to create sophisticated and hard-to-detect cyber attacks that could pose significant threats to national security. Traditional cybersecurity measures may struggle to counter such AI-driven threats, underscoring the urgent need for enhanced cybersecurity measures and preparedness.

Generative AI in Offensive Cyber Operations

Microsoft and OpenAI have detected and disrupted the malicious use of AI technologies for offensive cyber operations by U.S. adversaries, including Iran, North Korea, Russia, and China. These adversaries have used generative AI for purposes such as social engineering, phishing, and researching warfare-related technologies.

Generative AI is expected to enhance malicious social engineering, leading to more sophisticated deepfakes and voice cloning. Critics have raised concerns about the hasty public release of large-language models and have called for an increased focus on making them more secure.

The Role of Large Language Models

Large language models, such as OpenAI’s ChatGPT, have enabled more sophisticated deepfakes, voice cloning, and other malicious social engineering tactics. Cybersecurity firms have long used machine learning for defense, but offensive hackers are now utilizing it as well. Microsoft, which has invested billions in OpenAI, reports that generative AI is expected to further enhance malicious social engineering.

Notably, the North Korean cyberespionage group known as Kimsuky, Iran’s Revolutionary Guard, the Russian GRU military intelligence unit known as Fancy Bear, and Chinese cyberespionage groups have all used generative AI in various ways to conduct offensive cyber operations. Critics argue that Microsoft’s creation and selling of tools to address vulnerabilities in large language models may be contributing to the problem, and that more secure foundation models should be created instead.

Microsoft and OpenAI’s Response

Microsoft and OpenAI have collaborated to publish research on…

Source…

Microsoft Discovers State-backed Hackers From China, Russia, and Iran Are Using OpenAI Tools for Honing Skills


A new study from Microsoft and OpenAI has revealed that AI tools such as ChatGPT and other large language models (LLMs) are being used by hacking groups from Russia, China, Iran, and North Korea to increase the productivity of their hacking and fraud schemes, prompting the tech giant to ban all state-backed hacking groups from its AI tools.

The study, reportedly the first time an AI company has publicly disclosed cybersecurity concerns about threat actors using AI, identified five threat actors: two linked to China and one each linked to Russia, Iran, and North Korea.

According to reports, most hacker groups employed LLMs or OpenAI technologies to create phishing emails, automate computer programming and coding skills, and comprehend various subjects. It has also been discovered that a small group of threat actors with ties to China employ LLMs for translation and improved target communication.

The study found that Charcoal Typhoon, a threat actor associated with China, utilized artificial intelligence (AI) to facilitate communication and translation with targeted individuals or organizations, comprehend particular technologies, optimize program scripting techniques for automation, and simplify operational commands.

(Photo: Justin Sullivan/Getty Images) Microsoft CEO Satya Nadella speaks during the OpenAI DevDay event on November 6, 2023 in San Francisco, California. OpenAI CEO Sam Altman delivered the keynote address at the first-ever OpenAI DevDay conference.

Salmon Typhoon, another threat actor with ties to China, is allegedly utilizing AI to translate technical papers and computing jargon, find coding mistakes, write harmful code, and better grasp various subjects related to public domain research. 

It was also discovered that the Russian state-sponsored hacker collective Forest Blizzard employed LLMs to learn more about specific satellite capabilities and scripting methods for complex computer programs. According to reports, the group has targeted victims of strategic interest to the Russian government, such as groups involved in the conflict between Russia and…

Source…

Microsoft patches two zero-days for Valentine’s Day


Microsoft has patched two actively exploited zero-day vulnerabilities in its February Patch Tuesday – a pair of security feature bypasses affecting Internet Shortcut Files and Windows SmartScreen respectively – out of a total of just over 70 vulnerabilities disclosed in the second drop of 2024.

Among the more pressing issues this month are critical vulnerabilities in Microsoft Dynamics, Exchange Server, Office, Windows Hyper-V, and Pragmatic General Multicast, although none of these flaws are yet being exploited in the wild.

Water Hydra

The first of the two zero-days, tracked as CVE-2024-21412, was found by Trend Micro researchers. It appears to be in active use specifically against foreign exchange traders by a group tracked as Water Hydra.

According to Trend Micro, the cyber criminal gang is leveraging CVE-2024-21412 as part of a wider attack chain in order to bypass SmartScreen and deliver a remote access trojan (RAT) called DarkMe, likely as a precursor to future attacks, possibly involving ransomware.

“CVE-2024-21412 represents a critical vulnerability characterised by sophisticated exploitation of the Microsoft Defender SmartScreen through a zero-day flaw,” explained Saeed Abbasi, product manager for vulnerability research at the Qualys Threat Research Unit.

“This vulnerability is exploited via a specially crafted file delivered through phishing tactics, which cleverly manipulates internet shortcuts and WebDAV components to bypass the displayed security checks.

“The exploitation requires user interaction: attackers must convince the targeted user to open a malicious file, highlighting the importance of user awareness alongside technical defences. The impact of this vulnerability is profound, compromising security and undermining trust in protective mechanisms like SmartScreen,” said Abbasi.
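For background, the internet shortcuts mentioned in the quote are ordinary .url files, which use a simple INI-style text format. A minimal, benign sketch of what such a file contains and how it can be parsed (the file contents below are invented for illustration):

```python
import configparser

# A benign Internet Shortcut (.url) file is a small INI-style text file.
# The URL field names the target that Windows opens when the shortcut
# is double-clicked.
shortcut_text = """\
[InternetShortcut]
URL=https://example.com/report.pdf
IconIndex=0
"""

# .url files follow INI conventions, so the standard-library
# configparser can read them.
parser = configparser.ConfigParser()
parser.read_string(shortcut_text)
target = parser["InternetShortcut"]["URL"]
print(target)  # https://example.com/report.pdf
```

In the attack chain described above, the URL field would instead point at an attacker-controlled WebDAV share, which is how the displayed security checks are sidestepped.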

The second zero-day, tracked as CVE-2024-21351, is remarkably similar to the first in that it, too, ultimately impacts the SmartScreen service. In this case, however, it enables an attacker to bypass the checks SmartScreen performs on the so-called Mark of the Web (MotW), which indicates whether a file can be trusted, and to execute their own code.
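To illustrate, Windows records the Mark of the Web as a small NTFS alternate data stream named Zone.Identifier attached to the downloaded file; a ZoneId of 3 marks the Internet zone, which is what triggers SmartScreen's checks. A minimal sketch of parsing such a stream (the sample contents are invented):

```python
import configparser

# Standard Windows URL security zones recorded in Zone.Identifier streams.
ZONE_NAMES = {
    "0": "Local machine",
    "1": "Local intranet",
    "2": "Trusted sites",
    "3": "Internet",
    "4": "Restricted sites",
}

def parse_zone_identifier(stream_text: str) -> str:
    """Return the zone name recorded in a Zone.Identifier stream."""
    # The stream body is INI-style text with a [ZoneTransfer] section.
    parser = configparser.ConfigParser()
    parser.read_string(stream_text)
    zone_id = parser["ZoneTransfer"]["ZoneId"]
    return ZONE_NAMES.get(zone_id, f"Unknown ({zone_id})")

# Typical stream contents for a file downloaded from the internet.
sample = "[ZoneTransfer]\nZoneId=3\nReferrerUrl=https://example.com/\n"
print(parse_zone_identifier(sample))  # Internet
```

On an actual Windows system the stream can be read by opening the path with the stream suffix (for example, `report.pdf:Zone.Identifier`); the sketch above only parses the stream text, so it runs anywhere.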

“This…

Source…

Microsoft reveals how Iran, North Korea, China, and Russia are using AI for cyber war


Microsoft has revealed that US adversaries, primarily Iran and North Korea and to a lesser extent Russia and China, are increasingly employing generative artificial intelligence (AI) to mount offensive cyber operations. These adversaries have begun leveraging AI technology to orchestrate attacks, and Microsoft, in collaboration with its business partner OpenAI, the maker of ChatGPT, has detected and thwarted these threats.

In a blog post, the Redmond-based company emphasized that while these techniques were still in their “early-stage,” they were neither “particularly novel nor unique.” Nevertheless, Microsoft deemed it crucial to publicly expose them. As US rivals harness large-language models to expand their network-breaching capabilities and conduct influence operations, transparency becomes essential.

For years, cybersecurity firms have utilized machine learning for defense, primarily to identify anomalous behavior within networks. However, malicious actors—both criminals and offensive hackers—have also embraced this technology. The introduction of large-language models, exemplified by OpenAI’s ChatGPT, has elevated the game of cat-and-mouse in the cybersecurity landscape.
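The defensive use of machine learning mentioned above often comes down to flagging behaviour that deviates sharply from a learned baseline. A deliberately simplified sketch of that idea, using a plain z-score over invented login counts rather than any production model:

```python
import statistics

def flag_anomalies(counts, threshold=2.5):
    """Return indices of values more than `threshold` population
    standard deviations away from the mean."""
    mean = statistics.mean(counts)
    stdev = statistics.pstdev(counts)
    if stdev == 0:
        return []  # no variation, nothing stands out
    return [i for i, c in enumerate(counts)
            if abs(c - mean) / stdev > threshold]

# Invented hourly login counts for one account; the spike at index 5
# is the kind of outlier a defender would want surfaced.
logins = [4, 6, 5, 7, 5, 480, 6, 4]
print(flag_anomalies(logins))  # [5]
```

Real network-defense systems use far richer features and models, but the underlying goal, separating normal activity from statistical outliers, is the same.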

Microsoft’s substantial investment in OpenAI aligns with its commitment to advancing AI research. The announcement coincided with the release of a report highlighting the potential impact of generative AI on malicious social engineering. As we approach a year in which over 50 countries will conduct elections, the threat of disinformation looms large, exacerbated by the sophistication of deepfakes and voice cloning.

Here are specific examples that Microsoft provided. The company said that it has disabled generative AI accounts and assets associated with named groups:

North Korea: The North Korean cyberespionage group known as Kimsuky has used the models to research foreign think tanks that study the country, and to generate content likely to be used in spear-phishing hacking campaigns.

Iran: Iran’s Revolutionary Guard has used large-language models to assist in social engineering, in troubleshooting software errors, and even in studying how intruders might evade detection in a compromised network….

Source…