
NSFW Facebook ads being used to spread dangerous malware — don’t click on these


Hackers have devised a clever new way to trick unsuspecting Facebook users into downloading malware on their computers.

While having your Facebook account hacked is bad enough as it is, a new campaign discovered by Bitdefender uses compromised Facebook Business accounts to deliver the NodeStealer malware.

Source…

Five things organizations don’t consider before a ransomware attack


Ransomware is generally considered to be one of the greatest threats facing organizations today. Following the release of the recent report on ransomware by the National Cyber Security Centre, the Rt Hon Tom Tugendhat, Minister of State, said ransomware attacks are evolving and that “the rollout of ransomware as a service means an advanced knowledge of computing is no longer needed to wreak havoc; criminals are able to access software that will do much of the hard work for them.”

Despite heightened risks, awareness of the true dangers posed by a ransomware attack remains low, with many organizations operating without incident response plans and rarely or never testing their cyber defenses. Many will be aware of some of the more high-profile ransomware attacks, such as the MOVEit compromise, arguably the largest hack of the year, which impacted several large UK organizations. But organizations, particularly smaller ones, are likely to assume that their size protects them from being targeted.

Source…

Don’t trust that update! Untold number of Android users duped by dangerous SpyNote trojan


Android users have been put on high alert for spyware as a banking trojan by the name of SpyNote has recently returned to the limelight.

The Android-based malware has been a background security threat for users since 2022. However, now in its third revision and with the source code of one of its variants (known as ‘CypherRat’) having leaked online in January of 2023, detections of this spyware have spiked throughout the year.

Source…

Don’t expect quick fixes in ‘red-teaming’ of AI models. Security was an afterthought


BOSTON — White House officials concerned by AI chatbots’ potential for societal harm and the Silicon Valley powerhouses rushing them to market are heavily invested in a three-day competition ending Sunday at the DefCon hacker convention in Las Vegas.

Some 2,200 competitors tapped on laptops seeking to expose flaws in eight leading large-language models representative of technology’s next big thing. But don’t expect quick results from this first-ever independent “red-teaming” of multiple models.

Findings won’t be made public until about February. And even then, fixing flaws in these digital constructs — whose inner workings are neither wholly trustworthy nor fully fathomed even by their creators — will take time and millions of dollars.

Current AI models are simply too unwieldy, brittle and malleable, academic and corporate research shows. Security was an afterthought in their training as data scientists amassed breathtakingly complex collections of images and text. They are prone to racial and cultural biases, and easily manipulated.

“It’s tempting to pretend we can sprinkle some magic security dust on these systems after they are built, patch them into submission, or bolt special security apparatus on the side,” said Gary McGraw, a cybersecurity veteran and co-founder of the Berryville Institute of Machine Learning. DefCon competitors are “more likely to walk away finding new, hard problems,” said Bruce Schneier, a Harvard public-interest technologist. “This is computer security 30 years ago. We’re just breaking stuff left and right.”

Michael Sellitto of Anthropic, which provided one of the AI testing models, acknowledged in a press briefing that understanding their capabilities and safety issues “is sort of an open area of scientific inquiry.”

Conventional software uses well-defined code to issue explicit, step-by-step instructions. OpenAI’s ChatGPT, Google’s Bard and other language models are different. Trained largely by ingesting — and classifying — billions of data points in internet crawls, they are perpetual works in progress, an unsettling prospect given their transformative potential for humanity.

After publicly releasing chatbots last fall, the…

Source…