Russian Propaganda on Ukraine Appears in Minecraft and Other Video Games


Russian propaganda is spreading into the world’s video games.

In Minecraft, the immersive game owned by Microsoft, Russian players re-enacted the battle for Soledar, a city in Ukraine that Russian forces captured in January, posting a video of the game on their country’s most popular social media network, VKontakte.

A channel on the Russian version of World of Tanks, a multiplayer warfare game, commemorated the 78th anniversary of the defeat of Nazi Germany in May with a recreation of the Soviet Union’s parade of tanks in Moscow in 1945. On Roblox, the popular gaming platform, a user created an array of Interior Ministry forces in June to celebrate the national holiday, Russia Day.

These games and adjacent discussion sites like Discord and Steam are becoming online platforms for Russian agitprop, circulating to new, mostly younger audiences a torrent of propaganda that the Kremlin has used to try to justify the war in Ukraine.

In this virtual world, players have adopted the letter Z, a symbol of the Russian troops who invaded last year; embraced legally specious Russian territorial claims in Crimea and other places; and echoed President Vladimir V. Putin’s efforts to denigrate Ukrainians as Nazis and blame the West for the conflict.

“Glory to Russia,” declared a video tutorial on how to construct a flagpole with a Russian flag in Minecraft. It showed a Russian flag over a cityscape labeled Luhansk, one of the Ukrainian provinces that Russia has illegally annexed.

“The gaming world is really a platform that can impact public opinion, to reach an audience, especially young populations,” said Tanya Bekker, a researcher at ActiveFence, a cybersecurity company that identified several examples of Russian propaganda on Minecraft for The New York Times.

Microsoft’s president, Brad Smith, disclosed in April that the company’s security teams had identified recent Russian efforts “basically to penetrate some of these gaming communities,” citing examples in Minecraft and in Discord discussion groups. He said Microsoft had advised governments, which he did not name, about them, but he played down their significance.

“In truth, it’s not the No. 1 thing we should worry…


Viral ChatGPT poses propaganda and hacking risks, researchers warn


Ever since OpenAI’s viral chatbot was unveiled late last year, detractors have lined up to flag potential misuse of ChatGPT by email scammers, bots, stalkers and hackers.

The latest warning is particularly eye-catching: It comes from OpenAI itself. Two of its policy researchers were among the six authors of a new report that investigates the threat of AI-enabled influence operations. (One of them has since left OpenAI.)

“Our bottom-line judgment is that language models will be useful for propagandists and will likely transform online influence operations,” according to a blog accompanying the report, which was published Wednesday morning.

Concerns about advanced chatbots don’t stop at influence operations. Cybersecurity experts warn that ChatGPT and similar AI models could lower the bar for hackers to write malicious code to target existing or newly discovered vulnerabilities. Check Point Software Technologies Ltd., an Israel-based cybersecurity company, said attackers were already musing on hacking forums how to re-create malware strains or dark web marketplaces using the chatbot.

Several cybersecurity experts stressed that any malicious code provided by the model is only as good as the user and the questions asked of it. Still, they said it could help less sophisticated hackers with such things as developing better lures or automating post-exploitation actions. Another concern is that hackers could develop their own AI models.

WithSecure, a cybersecurity company based in Helsinki, contends in a new report, also out Wednesday, that bad actors will soon learn to game ChatGPT by crafting malicious prompts that could feed into phishing attempts, harassment and fake news.

“It’s now reasonable to assume any new communication you receive may have been written with the help of a robot,” Andy Patel, intelligence researcher at WithSecure, said in a statement.

A representative for OpenAI didn’t respond to a request for comment, nor did the OpenAI researchers who worked on the report on influence operations. The FBI, National Security Agency and National Security Council declined to comment on the risks of such AI models.

Kyle Hanslovan, who used to create…
