Tag Archive for: Artificial

Artificial intelligence Can Exacerbate Ransomware Attacks, Warns UK’s National Cyber Security Center


UK-based organizations and businesses have always been prominent victims of cyber threats – particularly ransomware. Britain’s cyber mastermind has recently investigated the role of AI and predict that the number of these attacks will only increase with time. Hackers will get ample chances to breach sensitive data due to the convenience that AI provides.

The National Cyber Security Center released a report stating their findings. According to them, AI removes the entry barrier for hackers who are new to the game. They can easily get into any system and carry out malicious activities without getting caught. Targeting victims will be a piece of cake with AI being available round the clock.

The NCSC claims that the next two years will significantly increase global ransomware threat incidents. Contemporary criminals have created criminal generative AI, more popularly referred to as “GENAI.” They are all set to offer it as a service, for people who can afford it. This service will make it even easier for any layman to enter into office systems and hack them.

Lindy Cameron who is the chief executive at NCSC, urges companies to remain at pace with modern cyber security tools. She emphasizes the importance of using AI productively for risk management on cyber threats.

Ransomware is the most frequent form of cybercrime, with good reason. It offers substantial financial compensation and has a well-established business model. Moreover, with the integration of AI, it’s evident that ransomware attacks are not going anywhere.

The Director General, James Babbage at NSA further ascertains that the released report is factually correct. Criminals will continue exploiting AI for their benefit and businesses must upscale to deal with it. AI increases the speed and abilities of already existing cyberattack schemes. It offers an easy entry point for all kinds of cyber criminals – regardless of their expertise or experience. Babbage also talks about child sexual abuse and fraud – both of which will also be affected as this world advances.

The British Government is strategically working on its cyber security plan. As of the latest reports, £2.6 billion ($3.3 billion) has been invested to…

Source…

Artificial Intelligence bolsters growth of cyber-attacks, audacity of cybercriminals


Listen to this article

A year out from generative AI’s widespread release to the public, cybercriminals continue to finesse AI tools to bolster the scale, speed, scope, and stealth of their activities.

Horton

“AI-driven cybersecurity threats are developing at a place that we have not seen before due to advancements in machine learning and the ability to amplify existing attack methodologies,” said Brendan Horton, a security analyst in the FoxPointe Solutions Information Risk Management Division of The Bonadio Group

From January to February 2023, researchers from Darktrace – a global leader in cybersecurity AI – saw a 135% increase in novel social engineering attacks, corresponding with the widespread adoption of ChatGPT, which was released to the public in October 2022.

“AI isn’t really a new technology, but it has gained a new attraction in recent years,” Horton said. “Now with generative AI tools you don’t really have to be a sophisticated cybercriminal to launch a cyberattack.”

These cyberattacks include AI-powered botnets (a network of hijacked computers) and enhanced social engineering and phishing campaigns which are increasingly easier for employees to fall for.

Miller

“From a business standpoint, phishing emails that can lead to either ransomware or other threats are becoming more adaptive and they’re becoming more authentic because of AI,” said Tim Miller, chief information officer at Community Bank, N.A.

For example, in pre-generative AI, a phishing email sent to an employee in Rochester by an overseas bad actor pretending to be a vendor in Buffalo could have linguistic red flags that would alert the employee not to respond.

“AI doesn’t make mistakes like misspellings,” said Horton. “Now with generative AI, we are seeing highly personalized messages that seem a lot more credible and are difficult to distinguish as threats. We’re also seeing more deep fake technology emerging with deep fake photos and audio.”

Overall, it’s vitally important for organizations to continue to conduct solid cyber-hygiene, educate their employees regularly on continuously evolving cyber threats, and not be afraid to use AI to their advantage.

“This…

Source…

[Webinar] Artificial Intelligence & Machine Learning in the Age of Ransomware & Data Breaches – October 25th, 1:00 pm – 2:00 pm EDT | Association of Certified E-Discovery Specialists (ACEDS)


Brian Wilson

Brian Wilson
Data Breach Advisory Services Managing Director
BDO

Brian leads our Data Breach Advisory services which assists organizations across the data breach lifecycle. We work with organizations to mitigate the risk of data breaches and identify when they occur; contain data breaches and minimize the impact on organizations; to holistically remediate vulnerabilities, harden defenses, incorporate lessons learned; and comply with regulatory reporting requirements, consumer data breach notifications laws, and third-party contractual obligations.

BDO’s ecosystem of capabilities, technologies, and partnerships are built on an uncompromising foundation of security, scalability, and defensibility. Our methodologies, agile approach, and tailored workflows assist organizations no matter where they are in the data breach lifecycle. Our subject matter expertise spans across legal, privacy, risk, compliance, crisis management, information governance, and cybersecurity. We adhere to industry standards, generally accepted frameworks and integrate leading, purpose-built, and emerging technologies including cloud, machine learning, and artificial intelligence to process information at scale and reduce the time it takes to report credible, reliable, and repeatable results with unwavering quality, consistency, and transparency.

Read Brian’s Full Bio

Source…

How Artificial Intelligence Is Changing Cyber Threats


Person looking at a visualization of an interconnected big data structure.
Image: NicoElNino/Adobe Stock

HackerOne, a security platform and hacker community forum, hosted a roundtable on Thursday, July 27, about the way generative artificial intelligence will change the practice of cybersecurity. Hackers and industry experts discussed the role of generative AI in various aspects of cybersecurity, including novel attack surfaces and what organizations should keep in mind when it comes to large language models.

Jump to:

Generative AI can introduce risks if organizations adopt it too quickly

Organizations using generative AI like ChatGPT to write code should be careful they don’t end up creating vulnerabilities in their haste, said Joseph “rez0” Thacker, a professional hacker and senior offensive security engineer at software-as-a-service security company AppOmni.

For example, ChatGPT doesn’t have the context to understand how vulnerabilities might arise in the code it produces. Organizations have to hope that ChatGPT will know how to produce SQL queries that aren’t vulnerable to SQL injection, Thacker said. Attackers being able to access user accounts or data stored across different parts of the organization often cause vulnerabilities that penetration testers frequently look for, and ChatGPT might not be able to take them into account in its code.

The two main risks for companies that may rush to use generative AI products are:

  • Allowing the LLM to be exposed in any way to external users that have access to internal data.
  • Connecting different tools and plugins with an AI feature that may access untrusted data, even if it’s internal.

How threat actors take advantage of generative AI

“We have to remember that systems like GPT models don’t create new things — what they do is reorient stuff that already exists … stuff it’s already been trained on,” said Klondike. “I think what we’re going to see is people who aren’t very technically skilled will be able to have access to their own GPT models that can teach them about the code or help them build ransomware that already exists.”

Prompt injection

Anything that browses the internet — as an LLM can do — could create this kind of problem.

One possible avenue of cyberattack on…

Source…