
Deepfakes: When seeing is no longer believing


Source…

Ransomware, email compromise are top security threats, but deepfakes increase


While ransomware and business email compromise (BEC) are leading causes of security incidents for businesses, geopolitics and deepfakes are playing an increasing role, according to reports from two leading cybersecurity companies.

VMware’s 2022 Global Incident Threat Response Report shows a steady rise in extortionary ransomware attacks and BEC, alongside fresh jumps in deepfakes and zero-day exploits.

A report based on client cases handled by Unit 42, Palo Alto Networks’ threat analysis team, echoed VMware’s findings, attributing roughly 70% of security incidents in the 12 months from May 2021 to April 2022 to ransomware and BEC attacks.

VMware, in its annual survey of 125 cybersecurity and incident response professionals, noted that 65% of respondents had seen incidents tied to geopolitical conflict, confirming an increase in cyberattacks since the Russian invasion of Ukraine.

Deepfakes, zero-days, API hacks emerge as threats

Deepfake technology, AI tools used to create convincing image, audio, and video hoaxes, is increasingly being used for cybercrime, after previously being used mainly for disinformation campaigns, according to VMware. Deepfake attacks, mostly associated with nation-state actors, shot up 13% year over year, with 66% of respondents reporting at least one incident.

Email was reported to be the top delivery method (78%) for these attacks, in sync with a general rise in BEC. From 2016 to 2021, according to the VMware report, BEC incidents cost organizations an estimated $43.3 billion.
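As an illustration of why email-borne BEC is so hard to stop at the inbox, the minimal sketch below (not taken from either report, with a hypothetical function name) shows two common screening heuristics defenders apply: a Reply-To domain that differs from the From domain, and an SPF/DKIM failure recorded by an upstream mail filter in the Authentication-Results header.

# Minimal BEC screening sketch (illustrative only, standard-library Python).
from email import message_from_string
from email.utils import parseaddr

def bec_red_flags(raw_message: str) -> list[str]:
    msg = message_from_string(raw_message)
    flags = []

    _, from_addr = parseaddr(msg.get("From", ""))
    _, reply_addr = parseaddr(msg.get("Reply-To", ""))
    from_domain = from_addr.rsplit("@", 1)[-1].lower()
    reply_domain = reply_addr.rsplit("@", 1)[-1].lower()

    # A Reply-To pointing at a different domain is a classic BEC lure.
    if reply_addr and reply_domain != from_domain:
        flags.append(f"Reply-To domain {reply_domain} differs from From domain {from_domain}")

    # Upstream mail filters usually record SPF/DKIM verdicts in this header.
    auth = msg.get("Authentication-Results", "").lower()
    if "spf=fail" in auth or "dkim=fail" in auth:
        flags.append("SPF/DKIM failure reported in Authentication-Results")

    return flags

Heuristics like these catch only the crudest lures; the reports above make clear that well-crafted BEC routinely passes such checks, which is why it remains a leading cause of losses.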

Source…

The threat of automated hacking, deepfakes and weaponised AI


Vishal Salvi, chief information security officer and head of the cyber security practice at Infosys, discusses automated hacking, deepfakes and weaponised AI, and how much of a threat they pose.

AI has been deployed in a number of ways by threat actors in recent times.

It is a vexing paradox that while emerging cyber technologies provide valuable benefits, their malicious use, in the form of automated hacking, deepfakes, and weaponised artificial intelligence (AI), among others, poses a threat. Along with existing threats such as ransomware, botnets, phishing, and denial-of-service attacks, they make information security hard to maintain.

It will become even more challenging as more devices and systems are connected to the internet, ever larger volumes of data that need securing are generated, and newer technologies such as the Internet of Things and 5G gain ground. The democratisation of powerful computing technologies, such as distributed computing and the public cloud, only accentuates the issue.

Indeed, the World Economic Forum warns that cyber threats could become a major, enduring risk to the world.

How real the threat is can be gleaned from the formation of the Joint Cybercrime Action Taskforce by Europol, the European Union’s (EU) law enforcement agency, which facilitates cross-border collaboration against cyber crime among 16 EU member countries as well as the U.S., Canada, and Australia, among others.

A Forrester study said 88% of respondents believe offensive AI is inevitable, with nearly half of them expecting AI-based attacks within the next year. With AI-powered attacks on the horizon, the study notes it “will be crucial to use AI as a force multiplier.”

Automated hacking

Increasing automation, a reality of the modern age, provides advantages such as speed, accuracy, and relief from monotonous tasks. Perversely, it has also given rise to automated hacking: hacking on an industrial scale, with more numerous and more ‘efficient’ attempts that can cause massive financial losses and destroy an organisation’s reputation. Such attacks are completely automated, from reconnaissance to attack orchestration, and executed at speed, leaving little time for…
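To make the "reconnaissance at machine speed" point concrete, here is a minimal, illustrative sketch (hypothetical names, standard-library Python only) of one simple defensive counterpart: flagging source IPs whose request rate in web server logs is far beyond anything a human operator would generate.

# Illustrative scanner-detection sketch over pre-parsed web server log entries.
from collections import defaultdict
from datetime import datetime, timedelta

# Hypothetical pre-parsed log entry: (source_ip, timestamp, request_path).
LogEntry = tuple[str, datetime, str]

def flag_probable_scanners(entries: list[LogEntry],
                           window: timedelta = timedelta(minutes=1),
                           threshold: int = 120) -> set[str]:
    """Return source IPs exceeding `threshold` requests within any `window`."""
    by_ip: dict[str, list[datetime]] = defaultdict(list)
    for ip, ts, _path in entries:
        by_ip[ip].append(ts)

    scanners = set()
    for ip, times in by_ip.items():
        times.sort()
        start = 0
        for end, ts in enumerate(times):
            # Slide the window forward until it spans at most `window`.
            while ts - times[start] > window:
                start += 1
            if end - start + 1 >= threshold:
                scanners.add(ip)
                break
    return scanners

Simple rate thresholds are easily evaded by attackers who slow down or distribute their probing, which is the point the article is building toward: automated offence tends to outpace manual defence.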

Source…

Deepfakes: The Next Big Threat


A number of mobile apps give anyone with a smartphone and a few minutes of time on their hands the ability to create and distribute a deepfake video. All it takes is a picture of, say, yourself that you’d swap with an actor in a movie or a television show. The apps do the hard part by recognizing the facial structure of the actor, so when your image is added to the movie or show, it is a pretty seamless recreation.
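For a sense of what "recognizing the facial structure" involves, the sketch below shows only the first step such apps automate, locating a face in a frame, using OpenCV's bundled Haar cascade (this assumes the opencv-python package is installed; real face-swap apps add landmark alignment, blending, and neural re-rendering on top).

# Face-detection sketch using OpenCV's bundled Haar cascade (illustrative only).
import cv2

def detect_faces(image_path: str):
    image = cv2.imread(image_path)
    gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
    cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    # Returns bounding boxes (x, y, width, height) for each detected face.
    return cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)

# Example usage:
# for (x, y, w, h) in detect_faces("frame.jpg"):
#     print(f"face at ({x}, {y}), size {w}x{h}")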

Chances are no one will actually mistake you for Brad Pitt or Reese Witherspoon, but what these apps (downloadable from the Apple App Store or Google Play) do is show how simple it is for the average person to make a fake image look legitimate. And while these apps are meant for entertainment, deepfakes are becoming a new category of cybercrime that is not just a problem for networks and data, but could also have a life-or-death impact.

The potential for deepfakes in cybercrime is dire enough that the FBI released a warning in March 2021, stating “Foreign actors are currently using synthetic content in their influence campaigns, and the FBI anticipates it will be increasingly used by foreign and criminal cyber actors for spearphishing and social engineering in an evolution of cyber operational tradecraft.”

During a webinar offered by Cato Networks, Raymond Lee, CEO of FakeNet.AI, and Etay Maor, senior director of security strategy at Cato Networks, showed photos and played audio recordings of both real people and fakes, demonstrating how difficult it is to tell fact from fiction.

Showing Up on the Dark Web

With increasing frequency, deepfakes are showing up on the dark web, a clear sign that threat actors see the technology as a promising new income stream. There is a burgeoning marketplace for products that create deepfakes, and within dark web chatrooms there are conversation threads dedicated to outlining the best methods for creating deepfakes for use in cybercrime. There is also growing interest in deepfakes among nation-state actors and political extremists, who want to use the technology to influence public discourse and spread propaganda.

Chatter surrounding deepfake methodology also has moved beyond the dark web to alternative social media sites…

Source…