
OpenAI Security Head Suggests ChatGPT Can Decrypt Russian Hacking Group Conversations in Pentagon Event


ChatGPT’s latest military use is deciphering conversations between hackers, according to OpenAI’s head of security, Matthew Knight, at the Pentagon’s Advantage DoD 2024 event. Knight reportedly explained that the chatbot deciphered a cryptic conversation within a Russian hacking group, as first reported by the Washington Post.

As Knight explained, deciphering the conversation was a task that even OpenAI’s Russian linguist had difficulty with, but he claims that GPT-4 succeeded. The hackers’ conversations were reportedly written in “Russian shorthand internet slang.” The demonstration was part of the Pentagon’s AI symposium showcasing viable uses of AI in the military.


(Photo: MARCO BERTORELLO/AFP via Getty Images)
A photo taken on October 4, 2023 in Manta, near Turin, shows a smartphone and a laptop displaying the logos of the artificial intelligence research laboratory OpenAI and its ChatGPT chatbot.

Panel discussions at the symposium featured representatives from well-known tech companies besides OpenAI’s Knight, such as Dr. Scott Papson, Principal Solutions Architect at Amazon Web Services, and Dr. Billie Rinaldi, Responsible AI Division Lead in Microsoft’s Strategic Missions and Technologies Division.

The event offered a glimpse into the future uses of AI in the military. One was hinted at by Shyam Sankar, chief technology officer of Palantir Technologies, a Pentagon contractor. Sankar commented that using ChatGPT as a chatbot is a “dead end,” further noting that the technology will likely serve developers rather than end users.


GPT-4 Uses on Military Intelligence

This is not the first time GPT-4 has been used to decipher cryptic messages; a Microsoft study claimed that state-backed hackers have long employed similar practices.

The study found that two hacking groups with ties to China are using AI to translate communication with targeted individuals or organizations as well as translate computer jargon and technical publications. 

AI Military Use Concerns

The event also saw industry…


Microsoft and OpenAI Sound the Alarm


Generative AI, a rapidly advancing technology, is increasingly becoming a tool of choice for offensive cyber operations by U.S. rivals. Microsoft and OpenAI have sounded the alarm about this disturbing trend, highlighting its potential to create sophisticated, hard-to-detect cyberattacks that could pose significant threats to national security. Traditional cybersecurity measures may struggle to counter such AI-driven threats, underscoring the urgent need for enhanced defenses and preparedness.

Generative AI in Offensive Cyber Operations

Microsoft and OpenAI have detected and disrupted the malicious use of AI technologies for offensive cyber operations by U.S. adversaries, including Iran, North Korea, Russia, and China. These adversaries have utilized generative AI for various purposes, such as social engineering, phishing, and researching technologies related to warfare.

Generative AI is expected to enhance malicious social engineering, leading to more sophisticated deepfakes and voice cloning. Critics have raised concerns about the hasty public release of large language models and the need for increased focus on making them more secure.

The Role of Large Language Models

The use of large language models, such as OpenAI’s ChatGPT, has led to an increase in sophisticated deepfakes, voice cloning, and other malicious social engineering tactics. Cybersecurity firms have long used machine learning for defense, but offensive hackers are now utilizing it as well, a trend that Microsoft, which has invested billions in OpenAI, expects to worsen.

Notably, the North Korean cyberespionage group known as Kimsuky, Iran’s Revolutionary Guard, the Russian GRU military intelligence unit known as Fancy Bear, and Chinese cyberespionage groups have all used generative AI in various ways to conduct offensive cyber operations. Critics argue that Microsoft’s creation and selling of tools to address vulnerabilities in large language models may be contributing to the problem, and that more secure foundation models should be created instead.

Microsoft and OpenAI’s Response

Microsoft and OpenAI have collaborated to publish research on…


Microsoft Discovers State-backed Hackers From China, Russia, and Iran Are Using OpenAI Tools for Honing Skills


A new study from Microsoft and OpenAI has revealed that AI tools such as ChatGPT and other large language models (LLMs) are being used by several hacking groups from Russia, China, Iran, and North Korea to increase hacking productivity and fraud schemes, prompting OpenAI to bar all state-backed hacking groups from its AI tools.

The study, reportedly the first time an AI company has disclosed cybersecurity concerns about threat actors using AI, identified five threat actors: two linked to China and one each linked to Russia, Iran, and North Korea.

According to reports, most of the hacking groups employed LLMs or OpenAI technologies to create phishing emails, automate programming and coding tasks, and research various subjects. A small group of threat actors with ties to China was also found to employ LLMs for translation and improved communication with targets.

The study found that Charcoal Typhoon, a threat actor associated with China, utilized artificial intelligence (AI) to facilitate communication and translation with targeted individuals or organizations, comprehend particular technologies, optimize program scripting techniques for automation, and simplify operational commands.

OpenAI Holds Its First Developer Conference

(Photo: Justin Sullivan/Getty Images)
SAN FRANCISCO, CALIFORNIA – NOVEMBER 06: Microsoft CEO Satya Nadella speaks during the OpenAI DevDay event on November 06, 2023 in San Francisco, California. OpenAI CEO Sam Altman delivered the keynote address at the first-ever OpenAI DevDay conference.

Salmon Typhoon, another threat actor with ties to China, is allegedly utilizing AI to translate technical papers and computing jargon, find coding mistakes, write harmful code, and better grasp various subjects related to public-domain research.

It was also discovered that the Russian state-sponsored hacker collective Forest Blizzard employed LLMs to learn more about specific satellite capabilities and scripting methods for complex computer programs. According to reports, the group has claimed victims who are essential to the Russian government, such as groups involved in the conflict between Russia and…


OpenAI Cyberattack Claimed By Anonymous Sudan


The hacker group Anonymous Sudan has claimed responsibility for a cyberattack on OpenAI, the prominent artificial intelligence research lab. In a Telegram post, the collective shared details about the OpenAI cyberattack and demanded the dismissal of Tal Broda, Head of the Research Platform at OpenAI, whom it accuses of supporting genocide.

The hackers continue to pose a threat to ChatGPT, vowing to sustain their attacks until their demands regarding Tal Broda and his allegedly dehumanizing views on Palestinians are met.

OpenAI Cyberattack

The Cyber Express team contacted OpenAI officials to verify the claims made by Anonymous Sudan. As of the time of reporting, no official response had been received from OpenAI.

In an attempt to independently verify the OpenAI cyberattack, our team accessed the official OpenAI website and ChatGPT, finding both to be functioning properly. This raises questions about the credibility of the hacker group’s claims, leaving room for speculation about their true motives.

OpenAI Cyberattack: Past Incidents Cast Doubt on Current Claims

Looking back to November 2023, OpenAI faced a similar situation when Anonymous Sudan, in collaboration with “Skynet,” claimed responsibility for a Distributed Denial of Service (DDoS) attack on OpenAI’s login portal. Users encountered difficulties logging into ChatGPT portals, leading to concerns raised on social media platforms.

While those login issues were initially attributed to an internal software glitch, Anonymous Sudan’s current claim raises the possibility of a recurring cyber threat.

Sam Altman’s Return and Immediate Plans for OpenAI

Amid these challenges, Sam Altman, the CEO of OpenAI, was fired in November but has since made a surprise return to his leadership position.


Altman announced the formation of a new initial board, consisting of Bret Taylor as Chair, Larry Summers, and Adam D’Angelo.

“I am returning to OpenAI as CEO. Mira will return to her role as CTO. The new initial board will consist of Bret Taylor (Chair), Larry Summers, and Adam D’Angelo,” reads the official statement.

In addition to this announcement, Altman also outlined…
