Tag Archive for: Google’s

What is the Titan M2 security chip in Google’s Pixel phones?


The Titan M2 inside the Pixel 6a, shown at Google I/O 2022

With the Pixel 6 series, Google debuted its in-house Tensor SoC. But that wasn’t the first time the search giant used a piece of custom silicon in its smartphones – the Pixel 2’s Pixel Visual Core was technically the first. One generation later, the company announced that Pixel 3 devices would include a hardware security module dubbed Titan M. Then, in 2021, Google followed it up with the Titan M2. The security chip has since become a selling point for Google phones like the Pixel 8 series.

So in this article, let’s take a closer look at the role of the Titan M2 in Pixel devices, how it works, and why it’s even necessary in the first place.

What is the Titan M2 chip all about?


Google’s Titan server chip (left) and first-generation Titan M security chip (right)

The Titan M2 is a dedicated security chip included in Pixel 6 and Pixel 7 series smartphones. You’ll also find it in some other Google products like the Pixel Tablet. Google designed the Titan M2 in-house so that it could exercise complete control over its feature set. The chip is based on the RISC-V CPU architecture and contains its own memory, RAM, and cryptographic accelerator.

The Titan M2 is one of the many measures Google has employed to improve smartphone security over the years. The company uses the chip in its Pixel phones to provide an additional layer of protection on top of Android’s default security measures.

Google designed the Titan M2 chip to augment Android’s default security measures.

Take Android’s mandatory full-disk encryption. On most devices, it relies on a security feature known as a Trusted Execution Environment (TEE), which is essentially the secure area of a processor. Android devices store their encryption keys within this secure area, which is in turn guarded with your pattern, PIN, or passcode. In other words, the TEE isolates cryptographic keys and never reveals them to the user or even the operating system.
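To illustrate the isolation model (a conceptual sketch, not Google’s actual implementation), the following Python toy models a secure element: the signing key is generated inside the object and never crosses the API boundary, operations are gated on the user’s PIN, and repeated wrong guesses lock the element, mimicking the brute-force throttling a chip like the Titan M2 enforces in hardware. All class and method names here are hypothetical.

```python
import hashlib
import hmac
import os


class SecureElementModel:
    """Toy model of a TEE/secure-element keystore (illustrative only).

    The key is created internally and never returned to callers; the
    "operating system" side only ever sees signatures, never key material.
    """

    def __init__(self, pin: str, max_attempts: int = 5):
        self.__key = os.urandom(32)      # key material stays inside the element
        self.__pin = pin.encode()
        self._attempts_left = max_attempts

    def sign(self, pin: str, message: bytes) -> bytes:
        """Perform a cryptographic operation inside the element."""
        if not self._check_pin(pin):
            raise PermissionError("wrong PIN")
        return hmac.new(self.__key, message, hashlib.sha256).digest()

    def _check_pin(self, pin: str) -> bool:
        # Hardware-style guess throttling: too many failures lock the element.
        if self._attempts_left <= 0:
            raise RuntimeError("secure element locked: too many attempts")
        if hmac.compare_digest(pin.encode(), self.__pin):
            self._attempts_left = 5      # reset counter on success
            return True
        self._attempts_left -= 1
        return False
```

The point of the sketch is the API shape: callers can ask the element to sign data, but there is no method that exports the key, and after five bad PIN attempts even the correct PIN no longer works until the element is reset.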

Virtually all modern smartphone SoCs have a TEE or similar secure environment. On Snapdragon chips, it’s commonly referred to as the Qualcomm Secure Execution Environment (QSEE). Apple’s Arm-based chips like the M1 have the Secure Enclave. With these…

Source…

Google’s new security pilot program will ban employee Internet access



The Internet is dangerous, so what if you just didn’t use it? That’s the somewhat ironic recommendation Google, one of the world’s largest Internet companies, is making to its employees. CNBC’s Jennifer Elias reports that Google is “starting a new pilot program where some employees will be restricted to Internet-free desktop PCs” while they work. An internal memo seen by CNBC notes that “Googlers are frequent targets of attacks” by criminals, and a great way to combat that is to not be on the Internet.

Employees at major tech companies are much richer targets for criminals than ordinary users. They have all sorts of access to sensitive data, and compromising a single employee can open a path to sensitive infrastructure. Just last week, Microsoft was targeted by a Chinese espionage hacking group that somehow stole a cryptographic key, letting it bypass Microsoft’s authentication systems and gain access to 25 organizations, including multiple government agencies.

The report says Google’s new pilot program “will disable Internet access on the select desktops, with the exception of internal web-based tools and Google-owned websites like Google Drive and Gmail.” Participation was originally mandatory for the 2,500 employees who were selected, but after “receiving feedback” (we’re going to assume that was very enthusiastic feedback), Google is letting employees opt out of the program. The company also wants some employees to work without root access, which is common sense for a lot of computer roles, but not really for developers, who are used to being able to install new programs and tools.

Being banned from the entire Internet would be tough, but Googlers in the high-security program will still get access to “Google-owned websites,” which is actually quite a bit of the Internet. Google Search would be useless, but you could probably live a pretty good Internet life, writing documents, sending emails, taking notes, chatting with people, and watching YouTube.

It would presumably still be possible to be emailed a virus attachment, but…

Source…

According to Researchers, Google’s Bard Presents a Ransomware Threat / Digital Information World


The introduction of AI is revolutionary in and of itself. But with such a rapidly evolving technology accessible to ordinary people, the chances of users exploiting it for unethical and fraudulent purposes are high. Google’s AI chatbot, Bard, is reported to willingly produce harmful phishing emails when given the right prompts. By tweaking the wording of those prompts in a specific manner, Bard can even generate basic ransomware code. Check Point, a cybersecurity firm, stated that Bard lags behind its competitor, ChatGPT, when it comes to resisting this kind of abuse.

In light of recent worries about the potential misuse of OpenAI’s large language model in generating harmful programs and threats, Check Point conducted its research with caution. ChatGPT has stronger safeguards than Google’s Bard, which has yet to reach the same level of security.

Check Point’s researchers gave ChatGPT and Bard identical prompts. When asked for phishing emails, both AI programs refused. But the findings showed a difference between the two: ChatGPT explicitly stated that engaging in such activity would be fraudulent, while Bard simply claimed that it could not fulfill the request. Furthermore, when prompted for a particular type of phishing email, ChatGPT continued to decline, while Bard began providing a well-written response.

Both Bard and ChatGPT also firmly refused when Check Point prompted them to write harmful ransomware code, even when the researchers tweaked the wording to claim it was just for security purposes. But it didn’t take the researchers long to get around Bard’s safeguards. They instructed the model to describe common behaviors performed by ransomware, and Bard responded with an entire array of malicious activities.

Subsequently, the team built on the list of ransomware functions the model had generated, asking it to provide code for specific tasks. At first, Bard held firm, claiming it could not proceed with such a…

Source…