I/O 2022: Android 13 security and privacy (and more!)

Every year at I/O we share the latest on privacy and security features on Android. But we know some users like to go a level deeper in understanding how we’re making the latest release safer, and more private, while continuing to offer a seamless experience. So let’s dig into the tools we’re building to better secure your data, enhance your privacy and increase trust in the apps and experiences on your devices.

Low latency, frictionless security

Regardless of whether a smartphone is used for consumer or enterprise purposes, attestation is a key underpinning of the integrity of the device and the apps running on it. Fundamentally, key attestation lets a developer bind a secret or piece of data to a device. This is a strong "same user, same device" assertion: as long as the key is available, a cryptographic assertion of integrity can be made.
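
As a concrete illustration, here is a minimal Kotlin sketch of requesting an attested key from Android Keystore; the key alias is hypothetical, and a real deployment would send the resulting certificate chain to a server for verification:

  import android.security.keystore.KeyGenParameterSpec
  import android.security.keystore.KeyProperties
  import java.security.KeyPairGenerator
  import java.security.KeyStore
  import java.security.cert.Certificate

  // Generate a device-bound EC signing key whose attestation certificate chain
  // embeds a server-supplied challenge, proving the key lives in this device's
  // secure hardware ("serverChallenge" is a hypothetical nonce from your server).
  fun generateAttestedKey(serverChallenge: ByteArray): Array<Certificate> {
      val spec = KeyGenParameterSpec.Builder("attested_key", KeyProperties.PURPOSE_SIGN)
          .setDigests(KeyProperties.DIGEST_SHA256)
          .setAttestationChallenge(serverChallenge) // binds the attestation to this request
          .build()
      KeyPairGenerator.getInstance(KeyProperties.KEY_ALGORITHM_EC, "AndroidKeyStore").run {
          initialize(spec)
          generateKeyPair()
      }
      val ks = KeyStore.getInstance("AndroidKeyStore").apply { load(null) }
      return ks.getCertificateChain("attested_key")
  }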

With Android 13 we have migrated to a new model for the provisioning of attestation keys to Android devices which is known as Remote Key Provisioning (RKP). This new approach will strengthen device security by eliminating factory provisioning errors and providing key vulnerability recovery by moving to an architecture where Google takes more responsibility in the certificate management lifecycle for these attestation keys. You can learn more about RKP here.

We’re also making even more modules updatable directly through Google Play System Updates so we can automatically upgrade more system components and fix bugs, seamlessly, without you having to worry about it. We now have more than 30 components in Android that can be automatically updated through Google Play, including new modules in Android 13 for Bluetooth and ultra-wideband (UWB).

Last year we talked about how the majority of vulnerabilities in major operating systems are caused by undefined behavior in programming languages like C/C++. Rust is an alternative language that provides the efficiency and flexibility required in advanced systems programming (OS, networking), with the added boost of memory safety. We are happy to report that Rust is being adopted in security critical parts of Android, such as our key management components and networking stacks.

Hardening the platform doesn’t stop with continual improvements in memory safety and the expansion of anti-exploitation techniques. It also includes hardening our API surfaces to provide a more secure experience for our end users.

In Android 13 we implemented numerous enhancements to help mitigate potential vulnerabilities that app developers may inadvertently introduce. This includes making runtime receivers safer by allowing developers to specify whether a particular broadcast receiver in their app should be exported and visible to other apps on the device. On top of this, intent filters block non-matching intents, which further hardens the app and its components.
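
To illustrate, here is a minimal Kotlin sketch of the new receiver flag; the action string and receiver body are hypothetical:

  import android.content.BroadcastReceiver
  import android.content.Context
  import android.content.Intent
  import android.content.IntentFilter

  // On Android 13 (API 33), runtime-registered receivers must declare whether
  // other apps may send them broadcasts. RECEIVER_NOT_EXPORTED keeps this
  // receiver reachable only from the app itself and the system.
  fun registerInternalReceiver(context: Context) {
      val receiver = object : BroadcastReceiver() {
          override fun onReceive(ctx: Context, intent: Intent) { /* handle the broadcast */ }
      }
      val filter = IntentFilter("com.example.ACTION_DATA_REFRESHED") // hypothetical action
      context.registerReceiver(receiver, filter, Context.RECEIVER_NOT_EXPORTED)
  }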

For enterprise customers who need to meet certain security certification requirements, we’ve updated our security logging reporting to add more coverage and consolidate security logs in one location. This is helpful for companies that need to meet standards like Common Criteria and is useful for partners such as management solutions providers who can review all security-related logs in one place.
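
For a device-owner admin app, the corresponding platform calls look roughly like this (a hedged sketch; "adminComponent" is a hypothetical DeviceAdminReceiver component, and these calls throw a SecurityException unless the caller actually holds device-owner privileges):

  import android.app.admin.DevicePolicyManager
  import android.app.admin.SecurityLog
  import android.content.ComponentName
  import android.content.Context

  // Turn on security logging, then pull the consolidated security events.
  fun fetchSecurityLogs(
      context: Context,
      adminComponent: ComponentName
  ): List<SecurityLog.SecurityEvent> {
      val dpm = context.getSystemService(DevicePolicyManager::class.java)
      dpm.setSecurityLoggingEnabled(adminComponent, true)
      // Returns null when no new logs have accumulated since the last retrieval.
      return dpm.retrieveSecurityLogs(adminComponent) ?: emptyList()
  }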

Privacy on your terms

Android 13 brings developers more ways to build privacy-centric apps. Apps can now implement a new Photo picker that allows the user to select the exact photos or videos they want to share without having to give another app access to their media library.
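
A minimal Kotlin sketch of invoking the photo picker via the androidx.activity contract (the class name and callback bodies are illustrative):

  import androidx.activity.ComponentActivity
  import androidx.activity.result.PickVisualMediaRequest
  import androidx.activity.result.contract.ActivityResultContracts

  // The picker hands back a URI the app can read, with no storage permission
  // and no access to the rest of the user's media library.
  class ShareActivity : ComponentActivity() {
      private val pickMedia =
          registerForActivityResult(ActivityResultContracts.PickVisualMedia()) { uri ->
              // uri is null if the user backed out of the picker.
              uri?.let { /* display or upload the selected photo */ }
          }

      fun onPickPhotoClicked() {
          pickMedia.launch(
              PickVisualMediaRequest(ActivityResultContracts.PickVisualMedia.ImageOnly)
          )
      }
  }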

With Android 13, we’re also using the nearby devices permission introduced last year to reduce the number of apps that require your location to function. For example, you won’t have to turn on location to enable Wi-Fi for certain apps and situations. We’ve also changed how storage works, requiring developers to ask for separate permissions to access audio, image, and video files.
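
In practice, the granular media permissions look like this (a sketch; the activity and callback are illustrative):

  import android.Manifest
  import android.os.Build
  import androidx.activity.ComponentActivity
  import androidx.activity.result.contract.ActivityResultContracts

  // Ask only for the media type the feature needs; on Android 13 the broad
  // READ_EXTERNAL_STORAGE permission is split into per-type permissions.
  class GalleryActivity : ComponentActivity() {
      private val requestPermission =
          registerForActivityResult(ActivityResultContracts.RequestPermission()) { granted ->
              if (granted) { /* load the user's images */ }
          }

      fun requestImageAccess() {
          val permission = if (Build.VERSION.SDK_INT >= 33) {
              Manifest.permission.READ_MEDIA_IMAGES // images only; audio and video are separate
          } else {
              Manifest.permission.READ_EXTERNAL_STORAGE
          }
          requestPermission.launch(permission)
      }
  }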

Previously, we’ve limited apps from accessing your clipboard in the background and alerted you when an app accessed it. With Android 13, we’re automatically deleting your clipboard history after a short period so apps are blocked from seeing old copied information.

In Android 11, we began automatically resetting permissions for apps you haven’t used for an extended period of time, and have since expanded the feature to devices running Android 6 and above. Since then, we’ve automatically reset over 5 billion permissions.

In Android 13, app makers can be even more proactive about privacy by giving up permissions on behalf of their users, reducing the time their apps have access to permissions they no longer need.
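
The platform API behind this is Context.revokeSelfPermissionsOnKill; a short sketch (the location permission is just an example):

  import android.Manifest
  import android.content.Context
  import android.os.Build

  // Once a one-time task needing precise location is done, hand the permission
  // back. Revocation takes effect when the app's processes next terminate.
  fun dropUnneededLocationAccess(context: Context) {
      if (Build.VERSION.SDK_INT >= 33) {
          context.revokeSelfPermissionsOnKill(
              listOf(Manifest.permission.ACCESS_FINE_LOCATION)
          )
      }
  }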

Finally, we know notifications are critical for many apps but are not always of equal importance to users. In Android 13, you’ll have more control over which apps you would like to get alerts from, as new apps on your device are required to ask you for permission by default before they can send you notifications.
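
For developers, the runtime notification permission is requested like any other dangerous permission; a brief sketch (the activity and callback are illustrative):

  import android.Manifest
  import android.content.pm.PackageManager
  import android.os.Build
  import androidx.activity.ComponentActivity
  import androidx.activity.result.contract.ActivityResultContracts
  import androidx.core.content.ContextCompat

  // On Android 13, apps must hold POST_NOTIFICATIONS before posting
  // notifications; the user decides at runtime.
  class OnboardingActivity : ComponentActivity() {
      private val askNotifications =
          registerForActivityResult(ActivityResultContracts.RequestPermission()) { granted ->
              // If denied, the app simply won't post notifications.
          }

      fun maybeAskForNotifications() {
          if (Build.VERSION.SDK_INT >= 33 &&
              ContextCompat.checkSelfPermission(this, Manifest.permission.POST_NOTIFICATIONS)
                  != PackageManager.PERMISSION_GRANTED
          ) {
              askNotifications.launch(Manifest.permission.POST_NOTIFICATIONS)
          }
      }
  }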

Apps you can trust

Most app developers build their apps using a variety of software development kits (SDKs) that bundle in pre-packaged functionality. While SDKs provide amazing functionality, app developers typically have little visibility or control over the SDK code, or insight into its performance.

We’re working with developers to make their apps more secure with a new Google Play SDK Index that helps them see SDK safety and reliability signals before they build the code into their apps. This ensures we're helping everyone build a more secure and private app ecosystem.

Last month, we also started rolling out a new Data safety section in Google Play to help you understand how apps plan to collect, share, and protect your data, before you install it. To instill even more trust in Play apps, we're enabling developers to have their apps independently validated against OWASP’s MASVS, a globally recognized standard for mobile app security.

We’re working with a small group of developers and authorized lab partners to evolve the program. Developers who have completed this independent validation can showcase this on their Data safety section.

Additional mobile security and safety

Just like our anti-malware protection Google Play Protect, which now scans 125 billion apps a day, we believe spam and phishing detection should be built in. We’re proud to announce that in a recent analyst report, Messages was the highest rated built-in messaging app for protection against phishing and scams.

Messages is now also helping to protect you against 1.5 billion spam messages per month, so you can avoid both annoying texts and attempts to access your data. These phishing attempts are increasingly how bad actors are trying to get your information, by getting you to click on a link or download an app, so we are always looking for ways to offer another line of defense.

Last year, we introduced end-to-end encryption in Messages to provide more security for your mobile conversations. Later this year, we’ll launch end-to-end encrypted group conversations in beta to ensure your personal messages get even more protection.

As with a lot of features we build, we try to do it in an open and transparent way. In Android 11 we announced a new platform feature, backed by an ISO standard, that enables the use of digital IDs on a smartphone in a privacy-preserving way. When you hand over your plastic license (or other credential) to someone for verification, it’s all or nothing: they have access to your full name, date of birth, address, and other personally identifiable information (PII). The mobile version allows much more fine-grained control, where the end user and/or app can select exactly what to share with the verifier. In addition, the verifier must declare whether they intend to retain the data returned. You can also present certain details of your credential, such as age, without revealing your identity.

Over the last two Android releases we have been improving this API and making it easier for third-party organizations to leverage it for various digital identity use cases, such as driver’s licenses, student IDs, or corporate badges. We’re now announcing that Google Wallet uses Android Identity Credential to support digital IDs and driver’s licenses. We’re working with states in the US and governments around the world to bring digital IDs to Wallet later this year. You can learn more about all of the new enhancements in Google Wallet here.
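
For developers, the underlying platform API lives in android.security.identity (API 30+); here is a hedged sketch of loading a provisioned credential (the name "mdl" is hypothetical, and real requests follow the ISO mdoc flow):

  import android.content.Context
  import android.security.identity.IdentityCredentialStore

  // Look up a previously provisioned credential from the device's store.
  fun loadDriversLicense(context: Context) {
      val store = IdentityCredentialStore.getInstance(context)
      val credential = store.getCredentialByName(
          "mdl", // hypothetical credential name
          IdentityCredentialStore.CIPHERSUITE_ECDHE_HKDF_ECDSA_WITH_AES_256_GCM_SHA256
      ) ?: return // nothing provisioned under that name
      // From here, the holder app retrieves only the data elements the verifier
      // requested (e.g., an age-over-21 claim) via credential.getEntries(...),
      // rather than handing over the whole document.
  }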

Protected by Android

We don’t think your security and privacy should be hard to understand and control. Later this year, we’ll begin rolling out a new destination in settings on Android 13 devices that puts all your device security and data privacy front and center.

The new Security & Privacy settings page will give you a simple, color-coded way to understand your safety status and will offer clear and actionable guidance to improve it. The page will be anchored by new action cards that notify you of critical steps you should take to address any safety risks. In addition to notifications to warn you about issues, we’ll also provide timely recommendations on how to enhance your privacy.

We know that to feel safe and in control of your data, you need to have a secure foundation you can count on. Because if your device isn’t secure, it’s not private either. We’re working hard to make sure you’re always protected by Android. Learn more about these protections on our website.

How we fought bad apps and developers in 2021

Providing a safe experience to billions of users continues to be one of the highest priorities for Google Play. Last year we introduced multiple privacy focused features, enhanced our protections against bad apps and developers, and improved SDK data safety. In addition, Google Play Protect continues to scan billions of installed apps each day across billions of devices to keep people safe from malware and unwanted software.

We continue to enhance our machine learning systems and review processes, and in 2021 we blocked 1.2 million policy violating apps from being published on Google Play, preventing billions of harmful installations. We also continued in our efforts to combat malicious and spammy developers, banning 190k bad accounts in 2021. In addition, we have closed around 500k developer accounts that are inactive or abandoned.

In May we announced our new Data safety section for Google Play where developers will be required to give users deeper insight into the privacy and security practices of the apps they download, and provide transparency into the data the app may collect and why. The Data safety section launched this week, and developers are required to complete this section for their apps by July 20th.

We’ve also invested in making life easier for our developers. We added the Policy and Programs section to Google Play Console to help developers manage all their app compliance issues in one central location. This includes the ability to appeal a decision and track its status from this page.

In addition, we continued to partner with SDK developers to improve app safety, limit how user data is shared, and improve lines of communication with app developers. SDKs provide functionality for app developers, but it can sometimes be tricky to know when an SDK is safe to use. Last year, we engaged with SDK developers to build a safer Android and Google Play ecosystem. As a result of this work, SDK developers have improved the safety of SDKs used by hundreds of thousands of apps impacting billions of users. This remains a huge investment area for our team, and we will continue in our efforts to make SDKs safer across the ecosystem.

Limiting access

The best way to ensure users' data stays safe is to limit access to it in the first place.

As a result of new platform protections and policies, developer collaboration and education, 98% of apps migrating to Android 11 or higher have reduced their access to sensitive APIs and user data. We've also significantly reduced the unnecessary, dangerous, or disallowed use of Accessibility APIs in apps migrating to Android 12, while preserving the functionality of legitimate use cases.

We also continued in our commitment to make Android a great place for families. Last year we disallowed the collection of Advertising ID (AAID) and other device identifiers from all users in apps solely targeting children, and gave all users the ability to delete their Advertising ID entirely, regardless of the app.

Pixel enhancements

For Pixel users, we had even more great features to help keep you safe. Our new Security hub helps protect your phone, apps, Google Account, and passwords by giving you a central view of your device’s current configuration. Security hub also provides recommendations to improve your security, helping you decide what settings best meet your needs.

In addition, Pixels now use new machine learning models that improve the detection of malware in Google Play Protect. The detection runs on your Pixel, and uses a privacy preserving technology called federated analytics to discover bad apps.

Our global teams are dedicated to keeping our billions of users safe, and look forward to many exciting announcements in 2022.

What’s up with in-the-wild exploits? Plus, what we’re doing about it.

If you are a regular reader of our Chrome release blog, you may have noticed that phrases like 'exploit for CVE-1234-567 exists in the wild' have been appearing more often recently. In this post we'll explore why there seems to be such an increase in exploits, and clarify some misconceptions in the process. We'll then share how Chrome is continuing to make it harder for attackers to achieve their goals.

How things work today

While the increase may initially seem concerning, it’s important to understand the reason behind this trend. If it's because there are many more exploits in the wild, it could point to a worrying trend. On the other hand, if we’re simply gaining more visibility into exploitation by attackers, it's actually a good thing! It’s good because it means we can respond by providing bug fixes to our users faster, and we can learn more about how real attackers operate.

So, which is it? It’s likely a little of both.

Our colleagues at Project Zero publicly track all known in-the-wild “zero-day” bugs, including those reported for browsers. A few observations stand out from their data.

First, we don’t believe there was simply no exploitation of Chromium-based browsers between 2015 and 2018. We recognize that we don’t have full visibility into active exploitation, and just because we didn’t detect any zero-days during those years doesn’t mean exploitation didn’t happen. Available exploitation data suffers from sampling bias.

Teams like Google’s Threat Analysis Group are also becoming increasingly sophisticated in their efforts to protect users by discovering zero-days and in-the-wild attacks. A good example is a bug in our Portals feature that we fixed last fall. This bug was discovered by a team member in Switzerland and reported to Chrome through our bug tracker. While Chrome normally keeps each web page locked away in a box called the “renderer sandbox,” this bug allowed code to break out, potentially allowing attackers to steal information. Working across multiple time zones and teams, it took the team three days to develop a fix and roll it out, as detailed in our video on the process.


Why so many exploits?

There are a number of factors at play, from changes in vendor and attacker behavior, to changes in the software itself. Here are four in particular that we've been discussing and exploring as a team.

First, we believe we’re seeing more bugs thanks to vendor transparency. Historically, many browser makers didn’t announce that a bug was being exploited in the wild, even if they knew it was happening. Today, most major browser makers have increased transparency via publishing details in release communications, and that may account for more publicly tracked “in the wild” exploitation. These efforts have been spearheaded by both browser security teams and dedicated research groups, such as Project Zero.

Second, we believe we’re seeing more exploits due to evolved attacker focus. There are two reasons to suspect attackers might be choosing to attack Chrome more than they did in the past.

  • Flash deprecation: In 2015 and 2016, Flash was a primary exploitation target. Chrome gradually made Flash a less attractive target for attackers (for instance requiring user clicks to activate Flash content) before finally removing it in Chrome 88 in January last year. As Flash is no longer available, attackers have had to switch to a harder target: the browser itself.
  • Chromium popularity: Attackers go for the most popular target. In early 2020, Edge switched to using the Chromium rendering engine. If attackers can find a bug in Chromium, they can now attack a greater percentage of users.

Third, some attacks that could previously be accomplished with a single bug now require multiple bugs. Before 2015, only a single in-the-wild bug was required to steal a user’s secrets from other websites, because multiple web pages lived together in a single renderer process. If an attacker could compromise the renderer process belonging to a malicious website that a user visited, they might have been able to access the credentials for some other more sensitive website.

With Chrome’s multiyear Site Isolation project largely complete, a single bug is almost never sufficient to do anything really bad. Attackers often need to chain at least two bugs: first, to compromise the renderer process, and second, to jump into the privileged Chrome browser process or directly into the device operating system. Sometimes multiple bugs are needed to achieve one or both of these steps.

So, to achieve the same result, an attacker generally now has to use more bugs than they previously did. For exactly the same level of attacker success, we’d see more in-the-wild bugs reported over time, as we add more layers of defense that the attacker needs to bypass.

Fourth, there’s simply the fact that software has bugs. Some fraction of those bugs are exploitable. Browsers increasingly mirror the complexity of operating systems — providing access to your peripherals, filesystem, 3D rendering, GPUs — and more complexity means more bugs.

Ultimately, we believe data is an important part of the story, but the absolute number of exploited bugs isn't a sufficient measure of security risk. Since some security bugs are inevitable, how a software vendor architects their software (so that the impact of any single bug is limited) and responds to critical security bugs is often much more important than the specifics of any single bug.

How Chrome is raising the bar

The Chrome team works hard to both detect and fix bugs before releases and get bug fixes out to users as quickly as possible. We’re proud of our record at fixing serious bugs quickly, and we are continually working to do better.

For example, one area of concern for us is the risk of n-day attacks: that is, exploitation of bugs we’ve already fixed, where the fixes are visible in our open-source code repositories. We have greatly reduced our “patch gap” from 35 days in Chrome 76 to an average of 18 days in subsequent milestones, and we expect this to reduce slightly further with Chrome’s faster release cycle.

Irrespective of how quickly bugs are fixed, any in-the-wild exploitation is bad. Chrome is working hard to make it expensive and difficult for attackers to achieve their goals.

Some examples of ongoing projects:

  • We continue to strengthen Site Isolation, especially on Android.
  • The V8 heap sandbox will prevent attackers from using JavaScript just-in-time (JIT) compilation bugs to compromise the renderer process. This will require attackers to add a third bug to these exploit chains, which means increased security, but could increase the number of in-the-wild exploits reported.
  • The MiraclePtr and *Scan projects aim to prevent exploitation of many bugs in our largest class of browser process bugs, called “use-after-free”. We will be applying similar systematic solutions to other classes of bugs over time.
  • Since “memory safety” bugs account for 70% of the exploitable security bugs, we aim to write new parts of Chrome in memory-safe languages.
  • We continue to work on post-exploitation mitigations such as CET and CFG.

We are well past the stage of having “easy wins” when it comes to raising the bar for security. All of these are long term projects with significant engineering challenges. But as we've shown with Site Isolation, Chrome isn't afraid of making long term investments in major security engineering projects. One of the major challenges is performance: all of these technologies (except memory safe languages) could risk slowing the browser. Expect a series of blog posts over the coming months as we explore performance vs. security trade-offs. These decisions are really hard: we do not want to make Chrome slower for billions of people, especially as this disproportionately hits users with slower devices. We strive to make Chrome secure for all our users, not just those with high-end systems.

How you can help

Above all: if Chrome is reminding you to update, please do!

If you’re an enterprise IT professional, keep your users up-to-date by keeping auto-update on, and familiarize yourself with the added enterprise policies and controls that you can apply to Chrome within your organization. We strongly advise not focusing on zero-days when making decisions about updates, but instead assuming that any Chrome security bug is under exploitation as an n-day.

If you're a security researcher, you can report bugs you find to the Chrome Vulnerability Rewards Program — and thanks for helping us make Chrome safer for everyone!

Empowering the next generation of Android Application Security Researchers

The external security researcher community plays an integral role in making the Google Play ecosystem safe and secure. Through this partnership with the community, Google has been able to collaborate with third-party developers to fix thousands of security issues in Android applications before they are exploited and reward security researchers for their hard work and dedication.

In order to empower the next generation of Android security researchers, Google has collaborated with industry partners including HackerOne and PayPal to host a number of Android App Hacking Workshops. These workshops are designed to educate security researchers and cybersecurity students of all skill levels on how to find Android application vulnerabilities through a series of hands-on working sessions, both in-person and virtual.

Through these workshops, we’ve seen attendees from groups such as Merritt College's cybersecurity program and alumni of Hack the Hood go on to report real-world security vulnerabilities to the Google Play Security Rewards program. This reward program is designed to identify and mitigate vulnerabilities in apps on Google Play, and keep Android users, developers and the Google Play ecosystem safe.

Today, we are releasing our slide deck and workshop materials, including source code for a custom-built Android application that allows you to test your Android application security skills in a variety of capture the flag style challenges.

These materials cover a wide range of techniques for finding vulnerabilities in Android applications. Whether you’re just getting started or have already found many bugs - chances are you’ll learn something new from these challenges! If you get stuck and need a hint on solving a challenge, the solutions for each are available in the Android App Hacking Workshop here.

As you work through the challenges and learn more about the techniques and tips described in our workshop materials, we’d love to hear your feedback.

Additional Resources:

  • If you want to learn more about how to prepare, launch, and run a Vulnerability Disclosure Program (VDP) or discover how to work with external security researchers, check out our VDP course here.
  • If you’re a developer looking to build more secure applications, check out Android app security best practices here.

Pixel 6: Setting a new standard for mobile security

With Pixel 6 and Pixel 6 Pro, we’re launching our most secure Pixel phone yet, with 5 years of security updates and the most layers of hardware security. These new Pixel smartphones take a layered security approach, with innovations spanning across the Google Tensor system on a chip (SoC) hardware to new Pixel-first features in the Android operating system, making it the first Pixel phone with Google security from the silicon all the way to the data center. Multiple dedicated security teams have also worked to ensure that Pixel’s security is provable through transparency and external validation.

Secure to the Core

Google has put user data protection and transparency at the forefront of hardware security with Google Tensor. Google Tensor’s main processors are Arm-based and utilize TrustZone™ technology. TrustZone is a key part of our security architecture for general secure processing, but the security improvements included in Google Tensor go beyond TrustZone.

Figure 1. Pixel Secure Environments

The Google Tensor security core is a custom designed security subsystem dedicated to the preservation of user privacy. It's distinct from the application processor, not only logically, but physically, and consists of a dedicated CPU, ROM, one-time-programmable (OTP) memory, crypto engine, internal SRAM, and protected DRAM. For Pixel 6 and 6 Pro, the security core’s primary use cases include protecting user data keys at runtime, hardening secure boot, and interfacing with Titan M2™.

Your secure hardware is only as good as your secure OS, and we are using Trusty, our open source trusted execution environment. Trusty OS is the secure OS used both in TrustZone and the Google Tensor security core.

With Pixel 6 and Pixel 6 Pro your security is enhanced by the new Titan M2™, our discrete security chip, fully designed and developed by Google. In this next generation chip, we moved to an in-house designed RISC-V processor, with extra speed and memory, and made it even more resilient to advanced attacks. Titan M2™ has been tested against the most rigorous standard for vulnerability assessment, AVA_VAN.5, by an independent, accredited evaluation lab. Titan M2™ supports Android StrongBox, which securely generates and stores keys used to protect your PINs and passwords, and works hand-in-hand with the Google Tensor security core to protect user data keys while in use in the SoC.
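
For app developers, requesting StrongBox-backed keys is a one-line opt-in; a brief Kotlin sketch (the key alias and parameters are illustrative):

  import android.security.keystore.KeyGenParameterSpec
  import android.security.keystore.KeyProperties
  import android.security.keystore.StrongBoxUnavailableException
  import javax.crypto.KeyGenerator

  // Ask Keystore to generate and keep an AES key inside StrongBox (backed by
  // Titan M2™ on Pixel 6). Returns false if the device has no StrongBox.
  fun generateStrongBoxKey(): Boolean = try {
      val spec = KeyGenParameterSpec.Builder(
          "payments_key", // hypothetical alias
          KeyProperties.PURPOSE_ENCRYPT or KeyProperties.PURPOSE_DECRYPT
      )
          .setBlockModes(KeyProperties.BLOCK_MODE_GCM)
          .setEncryptionPaddings(KeyProperties.ENCRYPTION_PADDING_NONE)
          .setIsStrongBoxBacked(true) // API 28+: require the dedicated security chip
          .build()
      KeyGenerator.getInstance(KeyProperties.KEY_ALGORITHM_AES, "AndroidKeyStore").run {
          init(spec)
          generateKey()
      }
      true
  } catch (e: StrongBoxUnavailableException) {
      false // caller may retry without setIsStrongBoxBacked
  }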

Moving a step higher in the system, Pixel 6 and Pixel 6 Pro ship with Android 12 and a slew of Pixel-first and Pixel-exclusive features.

Enhanced Controls

We aim to give users better ways to control their data and manage their devices with every release of Android. Starting with Android 12 on Pixel, you can use the new Security hub to manage all your security settings in one place. It helps protect your phone, apps, Google Account, and passwords by giving you a central view of your device’s current configuration. Security hub also provides recommendations to improve your security, helping you decide what settings best meet your needs.

For privacy, we are launching Privacy Dashboard, which will give you a simple and clear timeline view of the apps that have accessed your location, microphone and camera in the last 24 hours. If you notice apps that are accessing more data than you expected, the dashboard provides a path to controls to change those permissions on the fly.

To provide additional transparency, new indicators in Pixel’s status bar will show you when your camera and mic are being accessed by apps. If you want to disable that access, new privacy toggles give you the ability to turn off camera or microphone access across apps on your phone with a single tap, at any time.

The Pixel 6 and Pixel 6 Pro also include a toggle that lets you remove your device’s ability to connect to less-secure 2G networks. While necessary in certain situations, accessing 2G networks can open up additional attack vectors; this toggle helps users mitigate those risks when 2G connectivity isn’t needed.

Built-in security

By making all of our products secure by default, Google keeps more people safe online than anyone else in the world. With the Pixel 6 and Pixel 6 Pro, we’re also turning up the dial on default, built-in protections.

Our new optical under-display fingerprint sensor ensures that your biometric information is secure and never leaves your device. As part of our ongoing security development lifecycle, Pixel 6 and 6 Pro’s fingerprint unlock has been externally validated by security experts as a strong and secure biometric unlock mechanism meeting the Class 3 strength requirements defined in the Android 12 Compatibility Definition Document (CDD).
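
Apps can insist on this biometric class themselves; a short androidx.biometric sketch (titles and callback are illustrative):

  import androidx.biometric.BiometricManager.Authenticators
  import androidx.biometric.BiometricPrompt
  import androidx.core.content.ContextCompat
  import androidx.fragment.app.FragmentActivity

  // Require a Class 3 ("strong") biometric, the CDD tier that Pixel 6's
  // under-display fingerprint sensor was validated against.
  fun promptForStrongBiometric(
      activity: FragmentActivity,
      callback: BiometricPrompt.AuthenticationCallback
  ) {
      val promptInfo = BiometricPrompt.PromptInfo.Builder()
          .setTitle("Verify it's you")
          .setAllowedAuthenticators(Authenticators.BIOMETRIC_STRONG)
          .setNegativeButtonText("Cancel")
          .build()
      val executor = ContextCompat.getMainExecutor(activity)
      BiometricPrompt(activity, executor, callback).authenticate(promptInfo)
  }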

Phishing continues to be a huge attack vector, affecting everyone across different devices.

The Pixel 6 and Pixel 6 Pro introduce new anti-phishing protections. Built-in protections automatically scan for potential threats from phone calls, text messages, emails, and links sent through apps, notifying you if there’s a potential problem.

Users are also now better protected against bad apps by enhancements to our on-device detection capabilities within Google Play Protect. Since its launch in 2017, Google Play Protect has provided the ability to detect malicious applications even when the device is offline. The Pixel 6 and Pixel 6 Pro use new machine learning models that improve the detection of malware in Google Play Protect. The detection runs on your Pixel, and uses a privacy preserving technology called federated analytics to discover commonly-run bad apps. This will help to further protect over 3 billion users by improving Google Play Protect, which already analyzes over 100 billion apps every day to detect threats.

Many of Pixel’s privacy-preserving features run inside Private Compute Core, an open source sandbox isolated from the rest of the operating system and apps. Our open source Private Compute Services manages network communication for these features, and uses federated learning, federated analytics, and private information retrieval to improve features while preserving privacy. Some features already running on Private Compute Core include Live Caption, Now Playing, and Smart Reply suggestions.

Google Binary Transparency (GBT) is the newest addition to our open and verifiable security infrastructure, providing a new layer of software integrity for your device. Building on the principles pioneered by Certificate Transparency, GBT helps ensure your Pixel is only running verified OS software. It works by using append-only logs to store signed hashes of the system images. The logs are public and can be used to verify that what’s published is the same as what’s on the device – giving users and researchers the ability to independently verify OS integrity for the first time.

Beyond the Phone

Defense-in-depth isn’t just a matter of hardware and software layers. Security is a rigorous process. Pixel 6 and Pixel 6 Pro benefit from in-depth design and architecture reviews, memory-safe rewrites to security critical code, static analysis, formal verification of source code, fuzzing of critical components, and red-teaming, including with external security labs to pen-test our devices. Pixel is also part of the Android Vulnerability Rewards Program, which paid out $1.75 million last year, creating a valuable feedback loop between us and the security research community and, most importantly, helping us keep our users safe.

Capping off this combined hardware and software security system is the Titan Backup Architecture, which gives your Pixel a secure foot in the cloud. Launched in 2018, the combination of Android’s Backup Service and Google Cloud’s Titan Technology means that backed-up application data can only be decrypted by a randomly generated key that isn't known to anyone besides the client, including Google. This end-to-end service was independently audited by a third party security lab to ensure no one can access a user's backed-up application data without specifically knowing their passcode.

To top it all off, this end-to-end security from the hardware across the software to the data center comes with no fewer than 5 years of guaranteed Android security updates on Pixel 6 and Pixel 6 Pro devices from the date they launch in the US. This is an important commitment for the industry, and we hope that other smartphone manufacturers broaden this trend.

Together, our secure chipset, software and processes make Pixel 6 and Pixel 6 Pro the most secure Pixel phone yet.

An update on Memory Safety in Chrome

Security is a cat-and-mouse game. As attackers innovate, browsers always have to mount new defenses to stay ahead, and Chrome has invested in an ever-stronger multi-process architecture built on sandboxing and site isolation. Combined with fuzzing, these are still our primary lines of defense, but they are reaching their limits, and we can no longer solely rely on this strategy to defeat in-the-wild attacks.

Last year, we showed that more than 70% of our severe security bugs are memory safety problems. That is, mistakes with pointers in the C or C++ languages which cause memory to be misinterpreted.

This sounds like a problem! And, certainly, memory safety is an issue which needs to be taken seriously by the global software engineering community. Yet it’s also an opportunity because many bugs have the same sorts of root causes, meaning we may be able to squash a high proportion of our bugs in one step.

Chrome has been exploring three broad avenues to seize this opportunity:

  1. Make C++ safer through compile-time checks that pointers are correct.
  2. Make C++ safer through runtime checks that pointers are correct.
  3. Investigate use of a memory-safe language for parts of our codebase.

“Compile-time checks” mean that safety is guaranteed during the Chrome build process, before Chrome even gets to your device. “Runtime” means we do checks whilst Chrome is running on your device.

Runtime checks have a performance cost. Checking the correctness of a pointer is an infinitesimal cost in memory and CPU time. But with millions of pointers, it adds up. And since Chrome performance is important to billions of users, many of whom are using low-power mobile devices without much memory, an increase in these checks would result in a slower web.

Ideally we’d choose option 1 - make C++ safer, at compile time. Unfortunately, the language just isn’t designed that way. You can learn more about the investigation we've done in this area in Borrowing Trouble: The Difficulties Of A C++ Borrow-Checker that we're also publishing today.

So, we’re mostly left with options 2 and 3 - make C++ safer (but slower!) or start to use a different language. Chrome Security is experimenting with both of these approaches.

You’ll see major investments in C++ safety solutions - such as MiraclePtr and ABSL/STL hardened modes. In each case, we hope to eliminate a sizable fraction of our exploitable security bugs, but we also expect some performance penalty. For example, MiraclePtr prevents use-after-free bugs by quarantining memory that may still be referenced. On many mobile devices, memory is very precious and it’s hard to spare some for a quarantine. Nevertheless, MiraclePtr stands a chance of eliminating over 50% of the use-after-free bugs in the browser process - an enormous win for Chrome security, right now.

In parallel, we’ll be exploring whether we can use a memory safe language for parts of Chrome in the future. The leading contender is Rust, invented by our friends at Mozilla. This is (largely) compile-time safe; that is, the Rust compiler spots mistakes with pointers before the code even gets to your device, and thus there’s no performance penalty. Yet there are open questions about whether we can make C++ and Rust work well enough together. Even if we started writing new large components in Rust tomorrow, we’d be unlikely to eliminate a significant proportion of security vulnerabilities for many years. And can we make the language boundary clean enough that we can write parts of existing components in Rust? We don’t know yet. We’ve started to land limited, non-user-facing Rust experiments in the Chromium source code tree, but we’re not yet using it in production versions of Chrome - we remain in an experimental phase.

That’s why we’re pursuing both strategies in parallel. Watch this space for updates on our adventures in making C++ safer, and efforts to experiment with a new language in Chrome.

Introducing Android’s Private Compute Services

We introduced Android’s Private Compute Core in Android 12 Beta. Today, we're excited to announce a new suite of services that provide a privacy-preserving bridge between Private Compute Core and the cloud.

Recap: What is Private Compute Core?

Android’s Private Compute Core is an open source, secure environment that is isolated from the rest of the operating system and apps. With each new Android release we’ll add more privacy-preserving features to the Private Compute Core. Today, these include:

  • Live Caption, which adds captions to any media using Google’s on-device speech recognition
  • Now Playing, which recognizes music playing nearby and displays the song title and artist name on your device’s lock screen
  • Smart Reply, which suggests relevant responses based on the conversation you’re having in messaging apps

For these features to be private, they must:

  1. Keep the information on your device private. Android ensures that the sensitive data processed in the Private Compute Core is not shared to any apps without you taking an action. For instance, until you tap a Smart Reply, the OS keeps your reply hidden from both your keyboard and the app you’re typing into.
  2. Let your device use the cloud (to download new song catalogs or speech-recognition models) without compromising your privacy. This is where Private Compute Services comes in.

Introducing Android’s Private Compute Services

Machine learning features often improve by updating models, and Private Compute Services helps features get these updates over a private path. Android prevents any feature inside the Private Compute Core from having direct access to the network. Instead, features communicate over a small set of purposeful open-source APIs to Private Compute Services, which strips out identifying information and uses a set of privacy technologies, including federated learning, federated analytics, and private information retrieval.

We will publicly publish the source code for Private Compute Services, so it can be audited by security researchers and other teams outside of Google. This means it can go through the same rigorous security programs that ensure the safety of the Android platform.

We’re enthusiastic about the potential for machine learning to power more helpful features inside Android, and Android’s Private Compute Core will help users benefit from these features while strengthening privacy protections via the new Private Compute Services. Android is the first open source mobile OS to include this kind of externally verifiable privacy; Private Compute Services helps the Android OS continue to innovate in machine learning, while also maintaining the highest standards of privacy and security.

Protecting more with Site Isolation

Chrome's Site Isolation is an essential security defense that makes it harder for malicious web sites to steal data from other web sites. On Windows, Mac, Linux, and Chrome OS, Site Isolation protects all web sites from each other, and also ensures they do not share processes with extensions, which are more highly privileged than web sites. As of Chrome 92, we will start extending this capability so that extensions can no longer share processes with each other. This provides an extra line of defense against malicious extensions, without removing any existing extension capabilities.

Meanwhile, Site Isolation on Android currently focuses on protecting only high-value sites, to keep performance overheads low. Today, we are announcing two Site Isolation improvements that will protect more sites for our Android users. Starting in Chrome 92, Site Isolation will apply to sites where users log in via third-party providers, as well as sites that carry Cross-Origin-Opener-Policy headers.

Our ongoing goal with Site Isolation for Android is to offer additional layers of security without adversely affecting the user experience for resource-constrained devices. Site Isolation for all sites continues to be too costly for most Android devices, so our strategy is to improve heuristics for prioritizing sites that benefit most from added protection. So far, Chrome has been isolating sites where users log in by entering a password. However, many sites allow users to authenticate on a third-party site (for example, sites that offer "Sign in with Google"), possibly without the user ever typing in a password. This is most commonly accomplished with the industry-standard OAuth protocol. Starting in Chrome 92, Site Isolation will recognize common OAuth interactions and protect sites relying on OAuth-based login, so that user data is safe however a user chooses to authenticate.

Additionally, Chrome will now trigger Site Isolation based on the new Cross-Origin-Opener-Policy (COOP) response header. Supported since Chrome 83, this header allows operators of security-conscious websites to request a new browsing context group for certain HTML documents. This allows the document to better isolate itself from untrustworthy origins, by preventing attackers from referencing or manipulating the site's top-level window. It’s also one of the headers required to use powerful APIs such as SharedArrayBuffers. Starting in Chrome 92, Site Isolation will treat non-default values of the COOP header on any document as a signal that the document's underlying site may have sensitive data and will start isolating such sites. Thus, site operators who wish to ensure their sites are protected by Site Isolation on Android can do so by serving COOP headers on their sites.
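
To see what serving that signal involves, here is a toy Kotlin server (using the JDK's built-in HTTP server; the port and page are arbitrary) that sets the relevant header:

  import com.sun.net.httpserver.HttpServer
  import java.net.InetSocketAddress

  // Any non-default COOP value tells Chrome 92+ on Android that this site
  // wants its own browsing context group, and so is worth isolating.
  fun main() {
      val server = HttpServer.create(InetSocketAddress(8080), 0)
      server.createContext("/") { exchange ->
          exchange.responseHeaders.add("Cross-Origin-Opener-Policy", "same-origin")
          exchange.responseHeaders.add("Content-Type", "text/html")
          val body = "<p>This page requested its own browsing context group.</p>".toByteArray()
          exchange.sendResponseHeaders(200, body.size.toLong())
          exchange.responseBody.use { it.write(body) }
      }
      server.start()
  }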

As before, Chrome stores newly isolated sites locally on the device and clears the list whenever users clear their browsing history or other site data. Additionally, Chrome places certain restrictions on sites isolated by COOP to keep the list focused on recently-used sites, prevent it from growing overly large, and protect it from misuse (e.g., by requiring user interaction on COOP sites before adding them to the list). We continue to require a minimum RAM threshold (currently 2GB) for these new Site Isolation modes. With these considerations in place, our data suggests that the new Site Isolation improvements do not noticeably impact Chrome's overall memory usage or performance, while protecting many additional sites with sensitive user data.

Given these improvements in Site Isolation on Android, we have also decided to disable V8 runtime mitigations for Spectre on Android. These mitigations are less effective than Site Isolation and impose a performance cost. Disabling them brings Android on par with desktop platforms, where they have been turned off since Chrome 70. We advise that sites wanting to protect data from Spectre should consider serving COOP headers, which will in turn trigger Site Isolation.

Users who desire the most complete protection for their Android devices may manually opt in to full Site Isolation via chrome://flags/#enable-site-per-process, which isolates all websites but carries a higher memory cost.

Rust/C++ interop in the Android Platform

One of the main challenges of evaluating Rust for use within the Android platform was ensuring we could provide sufficient interoperability with our existing codebase. If Rust is to meet its goals of improving security, stability, and quality Android-wide, we need to be able to use Rust anywhere in the codebase that native code is required. To accomplish this, we need to provide the majority of functionality platform developers use. As we discussed previously, we have too much C++ to consider ignoring it, rewriting all of it is infeasible, and rewriting older code would likely be counterproductive as the bugs in that code have largely been fixed. This means interoperability is the most practical way forward.

Before introducing Rust into the Android Open Source Project (AOSP), we needed to demonstrate that Rust interoperability with C and C++ is sufficient for practical, convenient, and safe use within Android. Adding a new language has costs; we needed to demonstrate that Rust would be able to scale across the codebase and meet its potential in order to justify those costs. This post will cover the analysis we did more than a year ago while we evaluated Rust for use in Android. We also present a follow-up analysis with some insights into how the original analysis has held up as Android projects have adopted Rust.

Language interoperability in Android

Existing language interoperability in Android focuses on well defined foreign-function interface (FFI) boundaries, which is where code written in one programming language calls into code written in a different language. Rust support will likewise focus on the FFI boundary as this is consistent with how AOSP projects are developed, how code is shared, and how dependencies are managed. For Rust interoperability with C, the C application binary interface (ABI) is already sufficient.

Interoperability with C++ is more challenging and is the focus of this post. While both Rust and C++ support using the C ABI, it is not sufficient for idiomatic usage of either language. Simply enumerating the features of each language results in an unsurprising conclusion: many concepts are not easily translatable, nor do we necessarily want them to be. After all, we’re introducing Rust because many features and characteristics of C++ make it difficult to write safe and correct code. Therefore, our goal is not to consider all language features, but rather to analyze how Android uses C++ and ensure that interop is convenient for the vast majority of our use cases.

We analyzed code and interfaces in the Android platform specifically, not codebases in general. While this means our specific conclusions may not be accurate for other codebases, we hope the methodology can help others to make a more informed decision about introducing Rust into their large codebase. Our colleagues on the Chrome browser team have done a similar analysis, which you can find here.

This analysis was not originally intended to be published outside of Google: our goal was to make a data-driven decision on whether or not Rust was a good choice for systems development in Android. While the analysis is intended to be accurate and actionable, it was never intended to be comprehensive, and we’ve pointed out a couple of areas where it could be more complete. However, we also note that initial investigations into these areas showed that they would not significantly impact the results, which is why we decided to not invest the additional effort.

Methodology

Exported functions from Rust and C++ libraries are where we consider interop to be essential. Our goals are simple:

  • Rust must be able to call functions from C++ libraries and vice versa.
  • FFI should require a minimum of boilerplate.
  • FFI should not require deep expertise.

While making Rust functions callable from C++ is a goal, this analysis focuses on making C++ functions available to Rust so that new Rust code can be added while taking advantage of existing implementations in C++. To that end, we look at exported C++ functions and consider existing and planned compatibility with Rust via the C ABI and compatibility libraries. Types are extracted by running objdump on shared libraries to find external C++ functions they use[1] and running c++filt to parse the C++ types. This gives functions and their arguments. It does not consider return values, but a preliminary analysis[2] of those revealed that they would not significantly affect the results.
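
As a rough sketch of that extraction step (assuming a Linux host with binutils' objdump and c++filt on the PATH; fine for sketch-scale symbol counts, though a production tool would stream the demangler's output):

  // List the undefined (imported) symbols of a shared library with objdump,
  // keep the C++-mangled ones, and demangle them with c++filt.
  fun importedCppFunctions(sharedLib: String): List<String> {
      val mangled = ProcessBuilder("objdump", "-T", sharedLib).start()
          .inputStream.bufferedReader().readLines()
          .filter { "*UND*" in it }                  // undefined: supplied by another library
          .map { it.split(Regex("\\s+")).last() }    // symbol name is the last column
          .filter { it.startsWith("_Z") }            // Itanium-ABI mangled C++ names
      val demangler = ProcessBuilder("c++filt").start()
      demangler.outputStream.bufferedWriter().use { out ->
          mangled.forEach { out.write(it); out.newLine() }
      }
      return demangler.inputStream.bufferedReader().readLines()
  }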

We then classify each of these types into one of the following buckets:

Supported by bindgen

These are generally simple types involving primitives (including pointers and references to them). For these types, Rust’s existing FFI will handle them correctly, and Android’s build system will auto-generate the bindings.

Supported by cxx compat crate

These are handled by the cxx crate. This currently includes std::string, std::vector, and C++ methods (including pointers/references to these types). Users simply have to define the types and functions they want to share across languages and cxx will generate the code to do that safely.

Native support

These types are not directly supported, but the interfaces that use them have been manually reworked to add Rust support. Specifically, this includes types used by AIDL and protobufs.

We have also implemented a native interface for StatsD as the existing C++ interface relies on method overloading, which is not well supported by bindgen and cxx.[3] Usage of this system does not show up in the analysis because the C++ API does not use any unique types.

Potential addition to cxx

This category currently comprises common data structures such as std::optional and std::chrono::duration, as well as custom string and vector implementations.

These can either be supported natively by a future contribution to cxx, or by using its ExternType facilities. We have only included types in this category that we believe are relatively straightforward to implement and have a reasonable chance of being accepted into the cxx project.

We don't need/intend to support

Some types exposed in today’s C++ APIs are either an implicit part of the API, not something we expect to want to use from Rust, or language specific. Examples of types we do not intend to support include:

  • Mutexes - we expect that locking will take place in one language or the other, rather than needing to pass mutexes between languages, as per our coarse-grained philosophy.
  • native_handle - this is a JNI interface type, so it is inappropriate for use in Rust/C++ communication.
  • std::locale& - Android uses a separate locale system from C++ locales. This type primarily appears in our analysis due to, e.g., cout usage, which would be inappropriate to use in Rust.

Overall, this category represents types that we do not believe a Rust developer should be using.

HIDL

Android is in the process of deprecating HIDL and migrating new HALs to AIDL. We’re also migrating some existing implementations to stable AIDL. Our current plan is to not support HIDL, preferring to migrate to stable AIDL instead. These types thus currently fall into the “We don't need/intend to support” bucket above, but we break them out to be more specific. If there is sufficient demand for HIDL support, we may revisit this decision later.

Other

This contains all types that do not fit into any of the above buckets. It is currently mostly std::string being passed by value, which is not supported by cxx.

Top C++ libraries

One of the primary reasons for supporting interop is to allow reuse of existing code. With this in mind, we determined the most commonly used C++ libraries in Android: liblog, libbase, libutils, libcutils, libhidlbase, libbinder, libhardware, libz, libcrypto, and libui. We then analyzed all of the external C++ functions used by these libraries and their arguments to determine how well they would interoperate with Rust.

Overall, 81% of types are in the first three categories (which we currently fully support) and 87% are in the first four categories (which includes those we believe we can easily support). Almost all of the remaining types are those we believe we do not need to support.

Mainline modules

In addition to analyzing popular C++ libraries, we also examined Mainline modules. Supporting this context is critical as Android is migrating some of its core functionality to Mainline, including much of the native code we hope to augment with Rust. Additionally, their modularity presents an opportunity for interop support.

We analyzed 64 binaries and libraries in 21 modules. For each analyzed library we examined the C++ functions it uses and analyzed the types of their arguments to determine how well they would interoperate with Rust, in the same way we did above for the top 10 libraries.

Here 88% of types are in the first three categories and 90% in the first four, with almost all of the remaining being types we do not need to handle.

Analysis of Rust/C++ Interop in AOSP

With almost a year of Rust development in AOSP behind us, and more than a hundred thousand lines of code written in Rust, we can now examine how our original analysis has held up based on how C/C++ code is currently called from Rust in AOSP.[4]

The results largely match what we expected from our analysis with bindgen handling the majority of interop needs. Extensive use of AIDL by the new Keystore2 service results in the primary difference between our original analysis and actual Rust usage in the “Native Support” category.

A few current examples of interop are:

  • Cxx in Bluetooth - While Rust is intended to be the primary language for Bluetooth, migrating from the existing C/C++ implementation will happen in stages. Using cxx allows the Bluetooth team to continue serving legacy protocols like HIDL until they are phased out, while using the existing C++ support to incrementally migrate their service.
  • AIDL in keystore - Keystore implements AIDL services and interacts with apps and other services over AIDL. Providing this functionality would be difficult to support with tools like cxx or bindgen, but the native AIDL support is simple and ergonomic to use.
  • Manually-written wrappers in profcollectd - While our goal is to provide seamless interop for most use cases, we also want to demonstrate that, even when auto-generated interop solutions are not an option, manually creating them can be simple and straightforward. Profcollectd is a small daemon that only exists on non-production engineering builds. Instead of using cxx it uses some small manually-written C wrappers around C++ libraries that it then passes to bindgen.

Conclusion

Bindgen and cxx provide the vast majority of Rust/C++ interoperability needed by Android. For some of the exceptions, such as AIDL, the native version provides convenient interop between Rust and other languages. Manually written wrappers can be used to handle the few remaining types and functions not supported by other options as well as to create ergonomic Rust APIs. Overall, we believe interoperability between Rust and C++ is already largely sufficient for convenient use of Rust within Android.

If you are considering how Rust could integrate into your C++ project, we recommend doing a similar analysis of your codebase. When addressing interop gaps, we recommend that you consider upstreaming support to existing compat libraries like cxx.

Acknowledgements

Our first attempt at quantifying Rust/C++ interop involved analyzing the potential mismatches between the languages. This led to a lot of interesting information, but was difficult to draw actionable conclusions from. Rather than enumerating all the potential places where interop could occur, Stephen Hines suggested that we instead consider how code is currently shared between C/C++ projects as a reasonable proxy for where we’ll also likely want interop for Rust. This provided us with actionable information that was straightforward to prioritize and implement. Looking back, the data from our real-world Rust usage has reinforced that the initial methodology was sound. Thanks Stephen!

Also, thanks to:

  • Andrei Homescu and Stephen Crane for contributing AIDL support to AOSP.
  • Ivan Lozano for contributing protobuf support to AOSP.
  • David Tolnay for publishing cxx and accepting our contributions.
  • The many authors and contributors to bindgen.
  • Jeff Vander Stoep and Adrian Taylor for contributions to this post.


  1. We used undefined symbols of function type as reported by objdump to perform this analysis. This means that any header-only functions will be absent from our analysis, and internal (non-API) functions which are called by header-only functions may appear in it. 

  2. We extracted return values by parsing DWARF symbols, which give the return types of functions. 

  3. Even without automated binding generation, manually implementing the bindings is straightforward. 

  4. In the case of handwritten C/C++ wrappers, we analyzed the functions they call, not the wrappers themselves. For all uses of our native AIDL library, we analyzed the types used in the C++ version of the library. 

New protections for Enhanced Safe Browsing users in Chrome

In 2020 we launched Enhanced Safe Browsing, which you can turn on in your Chrome security settings, with the goal of substantially increasing safety on the web. These improvements are being built on top of existing security mechanisms that already protect billions of devices. Since the initial launch, we have continuously worked behind the scenes to improve our real-time URL checks and apply machine learning models to warn on previously-unknown attacks. As a result, Enhanced Safe Browsing users are successfully phished 35% less than other users. Starting with Chrome 91, we will roll out new features to help Enhanced Safe Browsing users better choose their extensions, as well as offer additional protections against downloading malicious files on the web.

Chrome extensions - Better protection before installation

Every day millions of people rely on Chrome extensions to help them be more productive, save money, shop or simply improve their browser experience. This is why it is important for us to continuously improve the safety of extensions published in the Chrome Web Store. For instance, through our integration with Google Safe Browsing in 2020, the number of malicious extensions that Chrome disabled to protect users grew by 81%. This comes on top of a number of improvements for more peace of mind when it comes to privacy and security.

Enhanced Safe Browsing will now offer additional protection when you install a new extension from the Chrome Web Store. A dialog will inform you if an extension you’re about to install is not a part of the list of extensions trusted by Enhanced Safe Browsing.

Any extension built by a developer who follows the Chrome Web Store Developer Program Policies will be considered trusted by Enhanced Safe Browsing. For new developers, it will take at least a few months of respecting these conditions to become trusted. We strive for all developers with compliant extensions to eventually reach this status. Today, trusted extensions represent nearly 75% of all extensions in the Chrome Web Store, and we expect this number to keep growing as new developers become trusted.

Improved download protection

Enhanced Safe Browsing will now offer you even better protection against risky files.

When you download a file, Chrome performs a first-level check with Google Safe Browsing using metadata about the downloaded file, such as the digest of the contents and the source of the file, to determine whether it’s potentially suspicious. For any download that Safe Browsing deems risky but not clearly unsafe, Enhanced Safe Browsing users will be presented with a warning (“bad_file.exe may be dangerous. Send to Google for scanning?”) and the ability to send the file to be scanned for a more in-depth analysis.

If you choose to send the file, Chrome will upload it to Google Safe Browsing, which will scan it using its static and dynamic analysis classifiers in real time. After a short wait, if Safe Browsing determines the file is unsafe, Chrome will display a warning. As always, you can bypass the warning and open the file without scanning. Uploaded files are deleted from Safe Browsing a short time after scanning.