
Announcing the launch of GUAC v0.1



Today, we are announcing the launch of the v0.1 version of Graph for Understanding Artifact Composition (GUAC). Introduced at KubeCon 2022 in October, GUAC targets a critical need in the software industry to understand the software supply chain. In collaboration with Kusari, Purdue University, Citi, and community members, we have incorporated feedback from our early testers to improve GUAC and make it more useful for security professionals. This improved version is now available as an API for you to start developing on top of, and integrating into, your systems.

The need for GUAC

High-profile incidents such as SolarWinds and the recent 3CX supply chain double-exposure are evidence that supply chain attacks are getting more sophisticated. As highlighted by the U.S. Executive Order on Cybersecurity, there’s a critical need for security professionals, CISOs, and security engineers to more deeply link information from different supply chain ecosystems to keep up with attackers and prevent exposure. Without linking different sources of information, it’s impossible to have a clear understanding of the potential risks posed by the software components in an organization.




GUAC aggregates software security metadata and maps it to a standard vocabulary of concepts relevant to the software supply chain. This data can be accessed via a GraphQL interface, allowing development of a rich ecosystem of integrations, command-line tools, visualizations, and policy engines. 




We hope that GUAC will help the wider software development community better evaluate the supply chain security posture of their organizations and projects. Feedback from early adopters has been overwhelmingly positive: 




“At Yahoo, we have found immense value and significant efficiency by utilizing the open source project GUAC. GUAC has allowed us to streamline our processes and increase efficiency in a way that was not possible before,” said Hemil Kadakia, Sr. Mgr. Software Dev Engineering, Paranoids, Yahoo.

The power of GUAC

Dynamic aggregation

GUAC is not just a static database: it is the first application that continuously evolves its database of the software an organization develops or uses. Supply chains change daily, and by aggregating your Software Bills of Materials (SBOMs) and Supply-chain Levels for Software Artifacts (SLSA) attestations with threat intelligence sources (e.g., OSV vulnerability feeds) and OSS insights (e.g., deps.dev), GUAC constantly incorporates the latest threat information and deeper analytics to help paint a more complete picture of your risk profile. And by merging external data with internal private metadata, GUAC brings the same level of reasoning to a company’s first-party software portfolio.




Seamless integration of incomplete metadata

Because of the complexity of the modern software stack, which often spans languages and toolchains, we discovered during GUAC development that it is difficult to produce high-quality SBOMs that are accurate, complete, and that conform to specifications and their producers’ intent.




Following the U.S. Executive Order on Cybersecurity, a large number of SBOM documents are now being generated during release and build workflows to explain to consumers what’s in their software. Given the difficulty of producing accurate SBOMs, consumers often face incomplete, inaccurate, or conflicting SBOMs. In these situations, GUAC can fill in the gaps across the various supply chain metadata: it links the documents and then uses heuristics to improve data quality and infer the correct intent. Additionally, the GUAC community is now working closely with SPDX to advance SBOM tooling and improve the quality of metadata.

  





GUAC's process for incorporating and enriching metadata for organizational insight

Consistent interfaces

Alongside the boom in SBOM production, there’s been a rapid expansion of new standards, document types, and formats, making it hard to perform consistent queries. The multiple formats for software supply chain metadata often refer to similar concepts, but with different terms. To integrate these, GUAC defines a common vocabulary for talking about the software supply chain—for example, artifacts, packages, repositories, and the relationships between them. 




This vocabulary is then exposed as a GraphQL API, empowering users to build powerful integrations on top of GUAC’s knowledge graph. For example, users are able to query seamlessly with the same commands across different SBOM formats like SPDX and CycloneDX. 
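
To make this concrete, here is a minimal TypeScript sketch of an integration against that GraphQL API. It assumes a locally running GUAC GraphQL server at http://localhost:8080/query, and the query fields shown are illustrative assumptions rather than an exact copy of GUAC’s schema; consult the GUAC documentation for the real endpoint and field names.

// Minimal sketch of querying GUAC's GraphQL API (TypeScript).
// The endpoint and the query fields below are assumptions for illustration.
const GUAC_ENDPOINT = "http://localhost:8080/query"; // assumed local deployment

async function queryGuac(query: string, variables: Record<string, unknown>) {
  const res = await fetch(GUAC_ENDPOINT, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ query, variables }),
  });
  if (!res.ok) throw new Error(`GUAC query failed: ${res.status}`);
  return (await res.json()).data;
}

async function main() {
  // Look up known versions of a package by name, regardless of whether the
  // underlying metadata arrived as an SPDX or CycloneDX SBOM.
  const data = await queryGuac(
    `query ($name: String) {
       packages(pkgSpec: { name: $name }) {
         type
         namespaces { namespace names { name versions { version } } }
       }
     }`,
    { name: "log4j-core" },
  );
  console.log(JSON.stringify(data, null, 2));
}

main();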




According to Ed Warnicke, Distinguished Engineer at Cisco Systems, "Supply chain security is increasingly about making sense of many different kinds of metadata from many different sources. GUAC knits all of that information together into something understandable and actionable." 


Potential integrations

Based on these features, we envision potential integrations that users can build on top of GUAC in order to:


  • Create policies based on trust

  • Quickly react to security compromises 

  • Determine an upgrade plan in response to a security incident

  • Create visualizers for data exploration, CLI tools for large-scale analysis and incident response, CI checks, IDE plugins to shift policy left, and more




Developers can also build data source integrations under GUAC to expand its coverage. The entire GUAC architecture is plug-and-play, so you can write data integrations to get:


  • Supply chain metadata from new sources like your preferred security vendors

  • Parsers to translate this metadata into the GUAC ontology

  • Database backends to store the GUAC data in either common databases or in organization-defined private data stores




GUAC's GraphQL query API enables a diverse ecosystem of tooling




Dejan Bosanac, an engineer at Red Hat and an active contributor to the GUAC project, further described GUAC’s ingestion abilities, “With mechanisms to ingest and certify data from various sources and GraphQL API to later query those data, we see it as a good foundation for our current and future SSCS efforts. Being a true open source initiative with a welcoming community is just a plus.” 



Next steps

Google is committed to making GUAC the best metadata synthesis and aggregation tool for security professionals. GUAC contributors are excited to meet at our monthly community calls and look forward to seeing demos of new applications built with GUAC.




“At Kusari, we are proud to have joined forces with Google's Open Source Security Team and the community to create and build GUAC,” says Tim Miller, CEO of Kusari. “With GUAC, we believe in the critical role it plays in safeguarding the software supply chain and we are dedicated to ensuring its success in the ecosystem.” 




Google is preparing SBOMs for consumption by the US Federal Government following EO 14028, and we are internally ingesting our SBOM catalog into GUAC to gather early insights. We encourage you to do the same with the GUAC release and submit your feedback. If the API is not flexible enough, please let us know how we can extend it. You can also submit suggestions and feedback on GUAC development or use cases, either by emailing [email protected] or filing an issue on our GitHub repository.




We hope you'll join us in this journey with GUAC!

$22k awarded to SBFT ‘23 fuzzing competition winners




Google’s Open Source Security Team recently sponsored a fuzzing competition as part of ICSE’s Search-Based and Fuzz Testing (SBFT) Workshop. Our goal was to encourage the development of new fuzzing techniques, which can lead to the discovery of software vulnerabilities and ultimately a safer open source ecosystem. 



The competitors’ fuzzers were judged on code coverage and their ability to discover bugs. 



Competitors were evaluated using FuzzBench, Google’s open source platform for testing and comparing fuzzers. The platform boasts a wide range of real-world benchmarks and vulnerabilities, allowing researchers to test their fuzzers in an authentic environment. We hope the results of the SBFT fuzzing competition will lead to more efficient fuzzers and eventually newly discovered vulnerabilities. 



A closer look at our winners

Eight teams submitted fuzzers to the final competition and an additional four industry fuzzers (AFL++, libFuzzer, Honggfuzz, and AFL) were included as controls to represent current practice. 




HasteFuzz is a modification of the widely used AFL++ fuzzer. HasteFuzz filters out potentially duplicate inputs to increase efficiency, allowing it to cover more code in the 23-hour test window because it is less likely to retrace its steps. AFL++ is already a strong fuzzer (it had the best code coverage of the industry fuzzers tested in this competition), and HasteFuzz’s filtering took it to the next level.

PASTIS makes use of multiple fuzzing engines that can independently cover different program locations, allowing PASTIS to find bugs quickly. AFLrustrust rewrites AFL++ on top of LibAFL, which is a library of features that allows you to customize existing fuzzers. AFLrustrust effectively prunes redundant test cases, improving its bug finding efficiency. Both PASTIS and AFLrustrust found 8 out of 15 possible bugs, with each fuzzer missing only one bug discovered by others. They both outperformed the industry fuzzers, which found 7 or fewer bugs under the same constraints.




Additional competitors, such as AFL+++ and AFLSmart++, also showed improvements over the industry controls, a result we had hoped for with the competition.



Fuzzing research continues

The innovation and improvement shown through the SBFT fuzzing competition is one example of why we have invested in the FuzzBench project. Since its launch in 2020, FuzzBench has contributed significantly to high-quality fuzzing research, conducting over 900 experiments and being discussed in more than 100 academic papers. FuzzBench was provided as a resource for the SBFT competition, but it is also available to researchers every day as a service. If you are interested in testing your fuzzers on FuzzBench, please see our guide to adding your fuzzer.




FuzzBench is in active development. We’d welcome feedback from any current or prospective FuzzBench users; your responses to this survey can help us plan the future of FuzzBench.




The Google Open Source Security Team would like to thank the ICSE conference and the SBFT workshop for hosting the fuzzing competition. We also want to thank each participant for their hard work. Together, we continue to push the boundaries of software security and create a safer, more robust open source ecosystem. 

Introducing a new way to buzz for eBPF vulnerabilities





Today, we are announcing Buzzer, a new eBPF fuzzing framework that aims to help harden the Linux Kernel.

What is eBPF and how does it verify safety?


eBPF is a technology that allows developers and sysadmins to easily run programs in a privileged context, like an operating system kernel. Recently, its popularity has increased, with more products adopting it as, for example, a network filtering solution. At the same time, it has maintained its relevance in the security research community, since it provides a powerful attack surface into the operating system.




While there are many solutions for fuzzing vulnerabilities in the Linux Kernel, they are not necessarily tailored to the unique features of eBPF. In particular, eBPF has many complex security rules that programs must follow to be considered valid and safe. These rules are enforced by a component of eBPF referred to as the "verifier". The correctness properties of the verifier implementation have proven difficult to understand by reading the source code alone. 

That’s why our security team at Google decided to create a new fuzzer framework that aims to test the limits of the eBPF verifier by generating eBPF programs.




The eBPF verifier’s main goal is to make sure that a program satisfies a certain set of safety rules, for example: programs should not be able to write outside designated memory regions, certain arithmetic operations should be restricted on pointers, and so on. However, like all pieces of software, there can be holes in the logic of these checks. This could potentially cause unsafe behavior of an eBPF program and have security implications.



Introducing Buzzer: a new way to fuzz eBPF


Buzzer aims to detect these errors in the verifier’s validation logic by generating a high volume of eBPF programs – around 35k per minute. It then takes each generated program and runs it through the verifier. If the verifier thinks it is safe, then the program is executed in a running kernel to determine if it is actually safe. Errors in the runtime behavior are detected through instrumentation code added by Buzzer.




It is with this technique that Buzzer found its first issue, CVE-2023-2163, an error in the branch pruning logic of the eBPF verifier that can cause unsafe paths to be overlooked, thus leading to arbitrary reading and writing of kernel memory. This issue demonstrates not only the complexity in the task that the verifier tries to accomplish (to make sure a program is safe in an efficient manner), but also how Buzzer can help researchers uncover complex bugs by automatically exploring corner cases in the verifier’s logic.




Additionally, Buzzer includes an easy-to-use eBPF generation library that sets it apart from other eBPF fuzzers and general-purpose Linux kernel fuzzers. By focusing on this particular technology, Buzzer can tailor its strategies to eBPF’s features.




We are excited about the contributions Buzzer will make to the overall hardening of the Linux Kernel by making the eBPF implementation safer. Our team plans to develop some new features, such as the ability to run eBPF programs across distributed VMs. 

Now that the code is open source, we are looking for contributors! If you have any interesting ideas for a feature we could implement in Buzzer, let us know in our GitHub repository.




We look forward to hearing your ideas and making eBPF safer together! Let the fuzzing begin.


Making authentication faster than ever: passkeys vs. passwords




In recognition of World Password Day 2023, Google announced its next step toward a passwordless future: passkeys. 



Passkeys are a new, passwordless authentication method that offers a convenient authentication experience for sites and apps, using just a fingerprint, face scan, or other screen lock. They are designed to enhance online security for users. Because they are based on the public key cryptographic protocols that underpin security keys, they are resistant to phishing and other online attacks, making them more secure than SMS, app-based one-time passwords, and other forms of multi-factor authentication (MFA). And since passkeys are standardized, a single implementation enables a passwordless experience across browsers and operating systems. 



Passkeys can be used in two different ways: on the same device or from a different device. For example, if you need to sign in to a website on an Android device and you have a passkey stored on that same device, then using it only involves unlocking the phone. On the other hand, if you need to sign in to that website on the Chrome browser on your computer, you simply scan a QR code to connect the phone and computer to use the passkey.



The technology behind the former (“same device passkey”) is not new: it was originally developed within the FIDO Alliance and first implemented by Google in August 2019 in select flows. Google and other FIDO members have been working together on enhancing the underlying technology of passkeys over the last few years to improve their usability and convenience. This technology behind passkeys allows users to log in to their account using any form of device-based user verification, such as biometrics or a PIN code. A credential is only registered once on a user’s personal device, and then the device proves possession of the registered credential to the remote server by asking the user to use their device’s screen lock. 
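
For developers, this registration step corresponds to the standard WebAuthn API that passkeys are built on. The sketch below shows roughly what creating a passkey can look like in a web app; the relying party values and the two server helpers are placeholders rather than part of any Google API, and a production flow would obtain the challenge from, and return the credential to, its own backend.

// Hedged sketch of passkey registration via the WebAuthn API (TypeScript).
// fetchChallengeFromServer and registerCredentialWithServer are hypothetical
// helpers; example.com and the user details are placeholder values.
declare function fetchChallengeFromServer(): Promise<ArrayBuffer>;
declare function registerCredentialWithServer(credential: Credential | null): Promise<void>;

async function createPasskey(): Promise<void> {
  const challenge = await fetchChallengeFromServer();

  const credential = await navigator.credentials.create({
    publicKey: {
      challenge,
      rp: { name: "Example", id: "example.com" },
      user: {
        id: new TextEncoder().encode("user-1234"),
        name: "user@example.com",
        displayName: "Example User",
      },
      // ES256 (-7) and RS256 (-257) are widely supported algorithms.
      pubKeyCredParams: [
        { type: "public-key", alg: -7 },
        { type: "public-key", alg: -257 },
      ],
      authenticatorSelection: {
        residentKey: "required",      // a discoverable credential, i.e. a passkey
        userVerification: "required", // require the device's screen lock or biometric
      },
    },
  });

  // The server stores only the credential ID and public key; the private key
  // never leaves the user's device or its passkey sync provider.
  await registerCredentialWithServer(credential);
}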



The user’s biometric, or other screen lock data, is never sent to Google’s servers: it stays securely stored on the device, and only cryptographic proof that the user has correctly provided it is sent to Google. Passkeys are also created and stored on your devices and are not sent to websites or apps. If you create a passkey on one device, the Google Password Manager can make it available on your other devices that are signed into the same system account.





Learn more about how passkeys work under the hood in our Google Security Blog.






Emerging Google data shows promise for a passwordless future with passkeys


Passkeys were originally designed to provide simpler and more secure authentication experiences for users, and so far, the technology has proven to be simpler and faster than passwords. Google data (March-April 2023) shows that the share of users successfully authenticating through same-device passkeys is 4x higher than the success rate typically achieved with passwords: the average authentication success rate with passwords is 13.8%, while the local passkey success rate is 63.8% (see Figure 1 below). 



Passkeys are not just easier to use, but also significantly faster than passwords. On average, a user can successfully sign in within 14.9 seconds, while it typically takes twice as long to sign in with passwords (30.4 seconds, as seen in Figure 2 below). Preliminary qualitative data collected from user research also indicates that users already perceive this convenience as the key value of passkeys.





Figure 1: authentication success rate with passkey vs password. Data from March-April 2023 (n≈100M)





Figure 2: time spent authenticating with passkey vs password (data from March-April 2023). Dashed, vertical lines indicate average duration for each authentication method (n≈100M) 





We are excited to share this data following our launch of passkeys for Google Accounts. Passkeys are faster, more secure, and more convenient than passwords and MFA, making them a desirable alternative to passwords and a promising development in the journey to a more secure future. To learn more about passkeys and how to turn a basic form-based username and password sign-in system into one that supports passkeys, check out the documentation on developers.google.com/identity/passkeys.  

Introducing rules_oci


Today, we are announcing the General Availability 1.0 version of rules_oci, an open-sourced Bazel plugin (“ruleset”) that makes it simpler and more secure to build container images with Bazel. This effort was a collaboration with Aspect and the Rules Authors Special Interest Group. In this post, we’ll explain how rules_oci differs from its predecessor, rules_docker, and describe the benefits it offers for both container image security and the container community.


Bazel and Distroless for supply chain security


Google’s popular build and test tool, known as Bazel, is gaining fast adoption within enterprises thanks to its ability to scale to the largest codebases and handle builds in almost any language. Because Bazel manages and caches dependencies by their integrity hash, it is uniquely suited to make assurances about the supply chain based on the Trust-on-First-Use principle. One way Google uses Bazel is to build widely used Distroless base images for Docker. 



Distroless is a series of minimal base images which improve supply-chain security. They restrict what's in your runtime container to precisely what's necessary for your app, which is a best practice employed by Google and other tech companies that have used containers in production for many years. Using minimal base images reduces the burden of managing risks associated with security vulnerabilities, licensing, and governance issues in the supply chain for building applications.



rules_oci vs rules_docker


Historically, building container images was supported by rules_docker, which is now in maintenance mode. The new ruleset, called rules_oci, is better suited for Distroless as well as most Bazel container builds for several reasons:


  • The Open Container Initiative standard has changed the playing field, and there are now multiple container runtimes and image formats. rules_oci is not tied to a Docker daemon being installed on the machine.

  • rules_docker was created before many excellent container manipulation tools existed, such as Crane, Skopeo, and Zot. rules_oci is able to simply rely on trusted third-party toolchains and avoid building or maintaining any Bazel-specific tools.

  • rules_oci doesn’t include any language-specific rules, which makes it much more maintainable than rules_docker. Also, it avoids the pitfalls of stale dependencies on other language rulesets.


Other benefits of rules_oci


There are other great features of rules_oci to highlight as well. For example, it uses Bazel’s downloader to fetch layers from a remote registry, improving caching and allowing transparent use of a private registry. Multi-architecture images make it more convenient to target platforms like ARM-based servers, and support Windows Containers as well. Code signing allows users to verify that a container image they use was created by the developer who signed it, and was not modified by any third-party along the way (e.g. person-in-the-middle attack). In combination with the work on Bazel team’s roadmap, you’ll also get a Software Bill of Materials (SBOM) showing what went into the container you use.




Since adopting rules_oci and Bazel 6, the Distroless team has seen a number of improvements to our build processes, image outputs, and security metadata:


  • Native support for signing allows us to eliminate a race condition that could have left some images unsigned. We now sign immutable digest references to images during the build instead of tags after the build.

  • Native support for OCI indexes (multi-platform images) allowed us to remove our dependency on Docker during builds. This also means more natural and debuggable failures when something goes wrong with multi-platform builds.

  • Improvements to fetching and caching mean our CI builds are faster and more reliable when using remote repositories.

  • Distroless images are now accompanied by SBOMs embedded in a signed attestation, which you can view with cosign and some jq magic:






cosign download attestation gcr.io/distroless/base:latest-amd64 | jq -rcs '.[0].payload' | base64 -d | jq -r '.predicate' | jq





In the end, rules_oci allowed us to modernize the Distroless build while also adding necessary supply chain security metadata to allow organizations to make better decisions about the images they consume.

Get started with rules_oci


Today, we’re happy to announce that rules_oci is now a 1.0 version. This stability guarantee follows the semver standard, and promises that future releases won’t include breaking public API changes. Aspect provides resources for using rules_oci, such as a Migration guide from rules_docker. It also provides support, training, and consulting services for effectively adopting rules_oci to build containers in all languages.



If you use rules_docker today, or are considering using Bazel to build your containers, this is a great time to give rules_oci a try. You can help by filing actionable issues, contributing code, or donating to the Rules Authors SIG OpenCollective. Since the project is developed and maintained entirely as community-driven open source, your support is essential to keeping the project healthy and responsive to your needs.







Special thanks to Sahin Yort and Alex Eagle from Aspect. 


So long passwords, thanks for all the phish



Starting today, you can create and use passkeys on your personal Google Account. When you do, Google will not ask for your password or 2-Step Verification (2SV) when you sign in.




Passkeys are a more convenient and safer alternative to passwords. They work on all major platforms and browsers, and allow users to sign in by unlocking their computer or mobile device with their fingerprint, face recognition or a local PIN.




Using passwords puts a lot of responsibility on users. Choosing strong passwords and remembering them across various accounts can be hard. In addition, even the most savvy users are often misled into giving them up during phishing attempts. 2SV (2FA/MFA) helps, but again puts strain on the user with additional, unwanted friction and still doesn’t fully protect against phishing attacks and targeted attacks like "SIM swaps" for SMS verification. Passkeys help address all these issues.






Creating passkeys on your Google Account


When you add a passkey to your Google Account, we will start asking for it when you sign in or perform sensitive actions on your account. The passkey itself is stored on your local computer or mobile device, which will ask for your screen lock biometrics or PIN to confirm it's really you. Biometric data is never shared with Google or any other third party – the screen lock only unlocks the passkey locally.




Unlike passwords, passkeys can only exist on your devices. They cannot be written down or accidentally given to a bad actor. When you use a passkey to sign in to your Google Account, it proves to Google that you have access to your device and are able to unlock it. Together, this means that passkeys protect you against phishing and any accidental mishandling that passwords are prone to, such as being reused or exposed in a data breach. This is stronger protection than most 2SV (2FA/MFA) methods offer today, which is why we allow you to skip not only the password but also 2SV when you use a passkey. In fact, passkeys are strong enough that they can stand in for security keys for users enrolled in our Advanced Protection Program.




Creating a passkey on your Google Account makes it an option for sign-in. Existing methods, including your password, will still work in case you need them, for example when using devices that don't support passkeys yet. Passkeys are still new and it will take some time before they work everywhere. However, creating a passkey today still comes with security benefits as it allows us to pay closer attention to the sign-ins that fall back to passwords. Over time, we'll increasingly scrutinize these as passkeys gain broader support and familiarity.






Using passkeys to sign in to your Google Account


Using passkeys does not mean that you have to use your phone every time you sign in. If you use multiple devices, e.g. a laptop, a PC or a tablet, you can create a passkey for each one. In addition, some platforms securely back your passkeys up and sync them to other devices you own. For example, if you create a passkey on your iPhone, that passkey will also be available on your other Apple devices if they are signed in to the same iCloud account. This protects you from being locked out of your account in case you lose your devices, and makes it easier for you to upgrade from one device to another.




If you want to sign in on a new device for the first time, or temporarily use someone else's device, you can use a passkey stored on your phone to do so. On the new device, you’d just select the option to "use a passkey from another device" and follow the prompts. This does not automatically transfer the passkey to the new device; it only uses your phone's screen lock and proximity to approve a one-time sign-in. If the new device supports storing its own passkeys, we will ask separately if you want to create one there.




In fact, if you sign in on a device shared with others, you should not create a passkey there. When you create a passkey on a device, anyone with access to that device and the ability to unlock it can sign in to your Google Account. While that might sound a bit alarming, most people will find it easier to control access to their devices than to maintain a good security posture with passwords and stay on constant lookout for phishing attempts.




If you lose a device with a passkey for your Google Account and believe someone else can unlock it, you can immediately revoke the passkey in your account settings. If your device supports the option to remotely wipe it, consider doing that as well, especially if it also has passkeys for other services. We always recommend having a recovery phone and email on your account, as it increases your chance of recovering it in case someone gains access.




To start using passkeys on your personal Google Account today, visit g.co/passkeys.






How does this work under the hood?


The main ingredient of a passkey is a cryptographic private key – this is what is stored on your devices. When you create one, the corresponding public key is uploaded to Google. When you sign in, we ask your device to sign a unique challenge with the private key. Your device only does so if you approve this, which requires unlocking the device. We then verify the signature with your public key.
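
In a web app, that challenge-and-signature exchange surfaces through the same WebAuthn API used to create the passkey. Below is a hedged sketch of the sign-in side, again with placeholder helpers and relying party values.

// Hedged sketch of passkey sign-in via the WebAuthn API (TypeScript).
// fetchSignInChallenge and verifyAssertionOnServer are hypothetical helpers.
declare function fetchSignInChallenge(): Promise<ArrayBuffer>;
declare function verifyAssertionOnServer(assertion: Credential | null): Promise<boolean>;

async function signInWithPasskey(): Promise<boolean> {
  const challenge = await fetchSignInChallenge();

  // The browser asks the device to sign the challenge; the device prompts for
  // the screen lock or biometric before it will use the private key.
  const assertion = await navigator.credentials.get({
    publicKey: {
      challenge,
      rpId: "example.com", // placeholder relying party
      userVerification: "required",
      // An empty allowCredentials list lets the user pick any passkey they
      // have for this site (a discoverable credential flow).
      allowCredentials: [],
    },
  });

  // Only the signature and credential metadata go back to the server, which
  // checks the signature against the stored public key; no biometric data
  // leaves the device.
  return verifyAssertionOnServer(assertion);
}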




Your device also ensures the signature can only be shared with Google websites and apps, and not with malicious phishing intermediaries. This means you don't have to be as watchful with where you use passkeys as you would with passwords, SMS verification codes, etc. The signature proves to us that the device is yours since it has the private key, that you were there to unlock it, and that you are actually trying to sign in to Google and not some intermediary phishing site. The only data shared with Google for this to work is the public key and the signature. Neither contains any information about your biometrics.




The private key behind the passkey lives on your devices and in some cases, it stays only on the device it was created on. In other cases, your operating system or an app similar to a password manager may sync it to other devices you own. Passkey sync providers like the Google Password Manager and iCloud Keychain use end-to-end encryption to keep your passkeys private.




Since each passkey can only be used for a single account, there is no risk of reusing them across services. This means that your Google Account is safe from data breaches across your other accounts, and vice versa.




When you do need to use a passkey from your phone to sign in on another device, the first step is usually to scan a QR code displayed by that device. The device then verifies that your phone is in proximity using a small anonymous Bluetooth message and sets up an end-to-end encrypted connection to the phone through the internet. The phone uses this connection to deliver your one-time passkey signature, which requires your approval and the biometric or screen lock step on the phone. Neither the passkey itself nor the screen lock information is sent to the new device. The Bluetooth proximity check ensures remote attackers can’t trick you into releasing a passkey signature, for example by sending you a screenshot of a QR code from their own device.




Passkeys are built on the protocols and standards Google helped create in the FIDO Alliance and W3C WebAuthn working group. This means passkey support works across all platforms and browsers that adopt these standards. You can store the passkeys for your Google Account on any compatible device or service.




The same standards and protocols power security keys, our strongest offering for high risk accounts. Passkeys inherit many of their strong account protections from security keys, but with convenience that is suitable for everyone.




Today's launch is a big step in a cross-industry effort that we helped start more than 10 years ago, and we are committed to passkeys as the future of secure sign-in, for everyone. We hope that other web and app developers adopt passkeys and are able to use our deployment as a model. Developers can learn more about passkey support on our Chrome and Android platforms here.

Google Trust Services now offers TLS certificates for Google Domains customers



We’re excited to announce changes that make getting Google Trust Services TLS certificates easier for Google Domains customers. With this integration, all Google Domains customers will be able to acquire public certificates for their websites at no additional cost, whether the site runs on a Google service or uses another provider. Additionally, Google Domains is now making an API available to allow for DNS-01 challenges with Google Domains DNS servers to issue and renew certificates automatically.



Like the existing Google Cloud integration, the Automatic Certificate Management Environment (ACME) protocol is used to enable seamless automatic lifecycle management of TLS certificates. 



These certificates are issued by the same Certificate Authority (CA) Google uses for its own sites, so they are widely supported across the entire spectrum of devices used to access your services.



How do I use it?



Using ACME ensures your certificates are renewed automatically, and many hosting services already support ACME. If you're running your own web servers or services, there are ACME clients that integrate easily with common servers. To use this feature, you will need an API key called an External Account Binding (EAB) key. This enables your certificate requests to be associated with your Google Domains account. You can get an API key by visiting Google Domains and navigating to the Security page for your domain. There you’ll see a section for Google Trust Services where you can get your EAB key.



Example of EAB Credentials in Google Domains



As an example, with the popular Certbot ACME client, the configuration to register an account looks like:


certbot register --email <CONTACT_EMAIL> --no-eff-email --server "https://dv.acme-v02.api.pki.goog/directory"  --eab-kid "<EAB_KEY_ID>" --eab-hmac-key "<EAB_HMAC_KEY>"




The EAB_KEY_ID and EAB_HMAC_KEY are both provided on your Google Domains security page.



After the account is created, you may issue certificates by running:

certbot certonly -d <domain.com> --server "https://dv.acme-v02.api.pki.goog/directory" --standalone



Then follow the prompts to complete validation and download your certificate. If you need additional information please visit the Google Domains help center.



Google Domains and ACME DNS-01




ACME uses challenges to validate domain control before issuing certificates. The ACME DNS-01 challenge can be an efficient way for users to automate the validation process and integrate with existing websites and web hosting services.



Google Domains now provides an API for ACME DNS-01 challenges that helps streamline the process for users to authenticate domain control quickly and securely. This is now offered in some popular ACME clients, including Certbot (via this plugin), Caddy, Certify The Web, and Posh-ACME. You can find additional information on the Google Domains site.






Example of DNS API Access Token in Google Domains



To set up automatic certificate provisioning with ACME and DNS-01, follow these steps:



  1. Sign in to Google Domains.
  2. Select the domain that you want to use.
  3. At the top left, click “Menu” and select “Security”.
  4. Under section “ACME DNS API”, click “Create token”.
  5. A dialog box will appear with an “API Token”. This is the API Token you will need to enter into your ACME client. You will need to copy this value and can do so by clicking the copy button next to the API Token. 
    • NOTE: This value is only shown once. After the dialog box is closed you will not be able to see this API Token again. Store this token in a safe place, since anyone who has it gains the ability to modify some DNS TXT records for your domain.  
    • If you did not save this value before closing the dialog box, you can easily delete and create a new API token.
    • A limit of 10 API tokens per domain can exist at a time. 
  6. Once the dialog box is closed you will be able to see in the list that the token has been created. You can delete this token at any time to revoke its access. 
  7. The API token can now be used in an ACME client that supports the Google Domains ACME DNS API. Each ACME client differs slightly in how to specify this API token, so you will need to read the documentation for your desired ACME client. 




Regardless of which ACME client you use, Google Domains and Google Trust Services are excited to offer a reliable option for no-cost TLS certificates. This continues the mission of helping build a safer internet by providing a transparent, trusted, and reliable Certificate Authority.

Our commitment to fighting invalid traffic on Connected TV



Connected TV (CTV) has not only transformed the entertainment world, but has also created a vibrant new platform for digital advertising. However, as with any innovative space, challenges arise, including bad actors aiming to siphon money away from advertisers and publishers through fraudulent or invalid ad traffic. Invalid traffic is an evolving challenge that has the potential to affect the integrity and health of digital advertising on CTV, but there are steps the industry can take to combat it and foster a clean, trustworthy, and sustainable ecosystem.



Information sharing and following best practices

Every player across the digital advertising ecosystem has the opportunity to help reduce the risk of CTV ad fraud. It starts by spreading awareness across the industry and building a commitment among partners to share best practices for defending against invalid traffic. Greater transparency and communication are crucial to creating lasting solutions.


One key best practice is contributing to and using relevant industry standards. We encourage CTV inventory providers to follow the CTV/OTT Device & App Identification Guidelines and IFA Guidelines. These guidelines, both of which were developed by the IAB Tech Lab, foster greater transparency, which in turn reduces the risk of invalid traffic on CTV. More information and details about using these resources can be found in the following guide: Protecting your ad-supported CTV experiences.



Collaborating on standards and solutions

No single company or industry group can solve this challenge on its own; we need to work collaboratively to solve the problem. Fortunately, we’re already seeing constructive efforts in this direction with industry-wide standards.


For example, the broad implementation of the IAB Tech Lab’s app-ads.txt and its web counterpart, ads.txt, have brought greater transparency to the digital advertising supply chain and have helped combat ad fraud by allowing advertisers to verify the sellers from whom they buy inventory. In 2021, the IAB Tech Lab extended the app-ads.txt standard to CTV in order to better protect and support CTV advertisers. This update is the first of several industry-wide steps that have been taken to further protect CTV advertising. In early 2022, the IAB Tech Lab released the ads.cert 2.0 “protocol suite,” along with a proposal to utilize this new standard to secure server-side connections (including for server-side ad insertion). Ads.cert 2.0 will also power future industry standards focused on securing the supply chain and preventing misrepresentation.


In addition to these efforts, the Media Rating Council (MRC) also engaged with stakeholders to develop its Server-Side Ad Insertion and OTT (Over-the-Top) Guidance, which provides a consistent set of guidelines specific to CTV for organizations that seek MRC accreditation for invalid traffic detection and filtration. We’re also seeing key partners tackle this challenge through informal working groups. For example, we collaborated with various CTV and security partners across our industry on a solution that allows companies to ensure video ad requests are coming from a valid Roku device.


But more work is needed. Players across the digital advertising ecosystem need to continue to build momentum through opportunities and initiatives that enable further collaboration on solutions.



Our ongoing investment in invalid traffic defenses

At Google, we’ve been defending our ad systems against invalid traffic for nearly two decades. By striking the right balance between automation and human expertise, we’ve developed a comprehensive set of measures to respond to threats like botnets, click farms, domain misrepresentation, and more. We’re now applying a similar approach to minimize the risk of CTV ad fraud, balancing innovation with tried-and-true technologies.


We’ve developed a machine learning platform built on TensorFlow, which has enabled us to expand the amount of inventory we can review and scale our defenses against invalid traffic to include additional surfaces, such as CTV. While machine learning has allowed us to better analyze ad traffic in new and diverse ways, we’ve also continued to leverage the work of research analysts and industry experts to ensure our automated enforcement systems are running effectively on CTV.


In addition to setting up new defenses for CTV, we’re also taking a more conservative approach with the CTV inventory we make available. This ensures that we aren’t exposing advertisers to unnecessary risk while CTV standards and best practices continue to evolve and mature, and while their adoption by the industry increases. 



Evolving and adapting

We know that bad actors continuously evolve and adapt their methods to evade detection and enforcement of our policies. The tactics behind invalid traffic and ad fraud will inevitably become more sophisticated with the growth of CTV. However, if the industry pulls together, we’ll be in a better position to not only address these new threats head on, but stay one step ahead of them while building a CTV advertising ecosystem that is safe and sustainable for everyone.

Security of Passkeys in the Google Password Manager


We are excited to announce passkey support on Android and Chrome for developers to test today, with general availability following later this year. In this post we cover details on how passkeys stored in the Google Password Manager are kept secure. See our post on the Android Developers Blog for a more general overview.

Passkeys are a safer and more secure alternative to passwords. They also replace the need for traditional second-factor authentication methods such as text messages, app-based one-time codes, or push-based approvals. Passkeys use public-key cryptography so that data breaches of service providers don't result in a compromise of passkey-protected accounts, and they are based on industry standard APIs and protocols to ensure they are not subject to phishing attacks.

Passkeys are the result of an industry-wide effort. They combine secure authentication standards created within the FIDO Alliance and the W3C Web Authentication working group with a common terminology and user experience across different platforms, recoverability against device loss, and a common integration path for developers. Passkeys are supported in Android and other leading industry client OS platforms.

A single passkey identifies a particular user account on some online service. A user has different passkeys for different services. The user's operating systems, or software similar to today's password managers, provide user-friendly management of passkeys. From the user's point of view, using passkeys is very similar to using saved passwords, but with significantly better security.

The main ingredient of a passkey is a cryptographic private key. In most cases, this private key lives only on the user's own devices, such as laptops or mobile phones. When a passkey is created, only its corresponding public key is stored by the online service. During login, the service uses the public key to verify a signature from the private key. This can only come from one of the user's devices. Additionally, the user is also required to unlock their device or credential store for this to happen, preventing sign-ins from e.g. a stolen phone. 
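
On the service side, that verification step boils down to checking a signature over a fresh challenge with the public key stored at registration time. The sketch below illustrates only that principle using Node.js WebCrypto; it is not a complete WebAuthn verifier, which would also check the client data, origin, RP ID hash, flags, and signature counter.

// Conceptual sketch only: verify a device's signature over a login challenge
// using the public key stored when the passkey was created (Node.js WebCrypto).
// A real WebAuthn/passkey verification involves additional checks and data
// formats not shown here.
import { webcrypto } from "node:crypto";

async function verifyChallengeSignature(
  publicKeyJwk: JsonWebKey, // stored by the service at registration time
  challenge: Uint8Array,    // issued by the service for this login attempt
  signature: Uint8Array,    // produced by the user's device with the private key
): Promise<boolean> {
  const key = await webcrypto.subtle.importKey(
    "jwk",
    publicKeyJwk,
    { name: "ECDSA", namedCurve: "P-256" },
    false,
    ["verify"],
  );
  return webcrypto.subtle.verify(
    { name: "ECDSA", hash: "SHA-256" },
    key,
    signature,
    challenge,
  );
}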

To address the common case of device loss or upgrade, a key feature enabled by passkeys is that the same private key can exist on multiple devices. This happens through platform-provided synchronization and backup.

Passkeys in the Google Password Manager

On Android, the Google Password Manager provides backup and sync of passkeys. This means that if a user sets up two Android devices with the same Google Account, passkeys created on one device are available on the other. This applies both to the case where a user has multiple devices simultaneously, for example a phone and a tablet, and the more common case where a user upgrades e.g. from an old Android phone to a new one.

Passkeys in the Google Password Manager are always end-to-end encrypted: When a passkey is backed up, its private key is uploaded only in its encrypted form using an encryption key that is only accessible on the user's own devices. This protects passkeys against Google itself, or e.g. a malicious attacker inside Google. Without access to the private key, such an attacker cannot use the passkey to sign in to its corresponding online account.

Additionally, passkey private keys are encrypted at rest on the user's devices, with a hardware-protected encryption key.

Creating or using passkeys stored in the Google Password Manager requires a screen lock to be set up. This prevents others from using a passkey even if they have access to the user's device, but is also necessary to facilitate the end-to-end encryption and safe recovery in the case of device loss.

Recovering access or adding new devices

When a user sets up a new Android device by transferring data from an older device, existing end-to-end encryption keys are securely transferred to the new device. In some cases, for example, when the older device was lost or damaged, users may need to recover the end-to-end encryption keys from a secure online backup.

To recover the end-to-end encryption key, the user must provide the lock screen PIN, password, or pattern of another existing device that had access to those keys. Note that restoring passkeys on a new device requires both being signed in to the Google Account and an existing device's screen lock.

Since screen lock PINs and patterns, in particular, are short, the recovery mechanism provides protection against brute-force guessing. After a small number of consecutive, incorrect attempts to provide the screen lock of an existing device, it can no longer be used. This number is always 10 or less, but for safety reasons we may block attempts before that number is reached. Screen locks of other existing devices may still be used.

If the maximum number of attempts is reached for all existing devices on file, e.g. when a malicious actor tries to brute-force the screen locks, the user may still be able to recover if they still have access to one of the existing devices and know its screen lock. By signing in to the existing device and changing its screen lock PIN, password, or pattern, the count of invalid recovery attempts is reset. End-to-end encryption keys can then be recovered on the new device by entering the new screen lock of the existing device.

Screen lock PINs, passwords or patterns themselves are not known to Google. The data that allows Google to verify correct input of a device's screen lock is stored on Google's servers in secure hardware enclaves and cannot be read by Google or any other entity. The secure hardware also enforces the limits on maximum guesses, which cannot exceed 10 attempts, even by an internal attack. This protects the screen lock information, even from Google.

When the screen lock is removed from a device, the previously configured screen lock may still be used for recovery of end-to-end encryption keys on other devices for a period of time up to 64 days. If a user believes their screen lock is compromised, the safer option is to configure a different screen lock (e.g. a different PIN). This disables the previous screen lock as a recovery factor immediately, as long as the user is online and signed in on the device.

Recovery user experience

If end-to-end encryption keys were not transferred during device setup, the recovery process happens automatically the first time a passkey is created or used on the new device. In most cases, this only happens once on each new device.

From the user's point of view, this means that when using a passkey for the first time on the new device, they will be asked for an existing device's screen lock in order to restore the end-to-end encryption keys, and then for the current device's screen lock or biometric, which is required every time a passkey is used.

Passkeys and device-bound private keys

Passkeys are an instance of FIDO multi-device credentials. Google recognizes that in certain deployment scenarios, relying parties may still require signals about the strong device binding that traditional FIDO credentials provide, while taking advantage of the recoverability and usability of passkeys.

To address this, passkeys on Android support the proposed Device-bound Public Key WebAuthn extension (devicePubKey). If this extension is requested when creating or using passkeys on Android, relying parties will receive two signatures in the result: One from the passkey private key, which may exist on multiple devices, and an additional signature from a second private key that only exists on the current device. This device-bound private key is unique to the passkey in question, and each response includes a copy of the corresponding device-bound public key.
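
Requesting the extension from a web app is a small addition to the normal WebAuthn call. The sketch below follows the draft extension as proposed; the extension identifier and the shape of its output may still change, and TypeScript's built-in DOM typings do not know about it yet, hence the casts.

// Hedged sketch of requesting the proposed devicePubKey WebAuthn extension
// (TypeScript). Names follow the draft proposal and may change.
async function getAssertionWithDevicePubKey(challenge: ArrayBuffer) {
  // The draft extension is not in TypeScript's DOM typings yet, so cast it.
  const extensions = { devicePubKey: {} } as unknown as AuthenticationExtensionsClientInputs;

  const assertion = (await navigator.credentials.get({
    publicKey: {
      challenge,
      rpId: "example.com", // placeholder relying party
      userVerification: "required",
      extensions,
    },
  })) as PublicKeyCredential;

  // Per the draft, the extension output carries a second signature from the
  // device-bound key plus the device-bound public key itself.
  const results = assertion.getClientExtensionResults();
  return (results as unknown as Record<string, unknown>)["devicePubKey"];
}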

Observing two passkey signatures with the same device-bound public key is a strong signal that the signatures are generated by the same device. On the other hand, if a relying party observes a device-bound public key it has not seen before, this may indicate that the passkey has been synced to a new device.

On Android, device-bound private keys are generated in the device's trusted execution environment (TEE), via the Android Keystore API. This provides hardware-backed protections against exfiltration of the device-bound private keys to other devices. Device-bound private keys are not backed up, so e.g. when a device is factory reset and restored from a prior backup, its device-bound key pairs will be different.

The device-bound key pair is created and stored on-demand. That means relying parties can request the devicePubKey extension when getting a signature from an existing passkey, even if devicePubKey was not requested when the passkey was created.