Tag Archives: Security

Bounds Checking Flexible Array Members

Buffer overflows are the cause of many security issues, and are a persistent thorn in programmers' sides. C is particularly susceptible to them. The advent of sanitizers mitigates some security issues by automatically inserting bounds checking, but they're not able to do so in all situations—in particular for flexible array members, because their size is known only at runtime.

The size of a flexible array member is typically opaque to the compiler. The alloc_size attribute on malloc() may be used for bounds checking flexible array members within the same function as the allocation. But the attribute's information isn't carried with the allocated object, making it impossible to perform bounds checking elsewhere.
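
As a quick illustration of that limitation, consider a sketch like the following (the allocator name is hypothetical; alloc_size is the documented GCC/Clang attribute):

#include <stddef.h>

/* Tells the compiler that the returned buffer holds n bytes. */
void *my_alloc(size_t n) __attribute__((alloc_size(1)));

void example(void) {
  char *buf = my_alloc(16);
  buf[20] = 'x';  /* Can be flagged here, where the size is visible. */
  /* But once buf is passed to another function, that callee has no
     size information left to check against. */
}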

To mitigate this drawback, Clang and GCC are introducing the counted_by attribute for flexible array members [1].


Specifying a flexible array member's element count

The number of elements allocated for a flexible array member is frequently stored in another field within the same structure. When applied to the flexible array member, the counted_by attribute explicitly references the field that stores the number of elements. This creates an implicit relationship between the flexible array member and the count field, allowing the array bounds sanitizer (enabled by -fsanitize=array-bounds) to verify flexible array operations.

There are some rules to follow when using this feature. For this structure:

struct foo {
	/* ... */
	size_t count; /* Number of elements in array */
	int array[] __attribute__((counted_by(count)));
};
  • The count field must be within the same non-anonymous, enclosing struct as the flexible array member.
  • The count field must be set before any array access.
  • The array field must have at least count elements available at all times.
  • The count field may change, but must never be larger than the number of elements originally allocated.

An example allocation of the above structure:

struct foo *foo_alloc(size_t count) {
  struct foo *ptr = NULL;
  size_t size = MAX(sizeof(struct foo),
                    offsetof(struct foo, array[0]) +
                        count * sizeof(ptr->array[0]));

  ptr = calloc(1, size);
  if (ptr)
    ptr->count = count;
  return ptr;
}
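
With the attribute in place, the sanitizer can catch out-of-bounds accesses through the flexible array member at runtime. A minimal sketch (the accessor name is illustrative; compile with -fsanitize=array-bounds):

int foo_get(struct foo *ptr, size_t idx) {
  /* The sanitizer compares idx against ptr->count at runtime and
     traps if the access is out of bounds. */
  return ptr->array[idx];
}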

Uses for fortification

Fortification (enabled by the _FORTIFY_SOURCE macro) is an ongoing project to make the Linux kernel more secure. Its main focus is preventing buffer overflows on memory and string operations.

Fortification uses the __builtin_object_size() and __builtin_dynamic_object_size() builtins to try to determine if input passed into a function is valid (i.e. "safe"). A call to __builtin_dynamic_object_size() generally isn't able to take the size of a flexible array member into account. But with the counted_by attribute, we're able to calculate the size and improve safety.
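
For example, a fortified helper can now query the size of the flexible array member at runtime. A sketch, assuming the struct foo definition above:

size_t foo_array_size(struct foo *ptr) {
  /* Without counted_by, this generally returns (size_t)-1 (unknown).
     With it, the compiler can compute ptr->count * sizeof(int) for
     the array member. */
  return __builtin_dynamic_object_size(ptr->array, 1);
}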


Uses in the Linux kernel

The counted_by attribute is already in use in the Linux kernel, and will be instrumental in catching issues like the integer overflows that lead to heap buffer overflows. We want to expand its use to more flexible array members, and enforce its use in the future.


Conclusion

The counted_by attribute helps address a long-standing fortification roadblock where the memory bounds of a flexible array member couldn't be determined by the compiler, making Linux and other hardened applications less exploitable.

[1] In Clang v18.0.0 and GCC v15.0.0.

By Bill Wendling, Staff Software Engineer

Sustaining Digital Certificate Security – Entrust Certificate Distrust

The Chrome Security Team prioritizes the security and privacy of Chrome’s users, and we are unwilling to compromise on these values.

The Chrome Root Program Policy states that CA certificates included in the Chrome Root Store must provide value to Chrome end users that exceeds the risk of their continued inclusion. It also describes many of the factors we consider significant when CA Owners disclose and respond to incidents. When things don’t go right, we expect CA Owners to commit to meaningful and demonstrable change resulting in evidenced continuous improvement.

Over the past several years, publicly disclosed incident reports have highlighted a pattern of concerning behaviors by Entrust that falls short of the above expectations and has eroded confidence in its competence, reliability, and integrity as a publicly-trusted CA Owner.

In response to the above concerns and to preserve the integrity of the Web PKI ecosystem, Chrome will take the following actions.

Upcoming change in Chrome 127 and higher:

This approach attempts to minimize disruption to existing subscribers using a recently announced Chrome feature to remove default trust based on the SCTs in certificates.

Additionally, should a Chrome user or enterprise explicitly trust any of the above certificates on a platform and version of Chrome relying on the Chrome Root Store (e.g., explicit trust is conveyed through a Group Policy Object on Windows), the SCT-based constraints described above will be overridden and certificates will function as they do today.

To further minimize risk of disruption, website operators are encouraged to review the “Frequently Asked Questions” listed below.

Why is Chrome taking action?

Certification Authorities (CAs) serve a privileged and trusted role on the Internet that underpins encrypted connections between browsers and websites. With this tremendous responsibility comes an expectation of adhering to reasonable and consensus-driven security and compliance expectations, including those defined by the CA/Browser Forum TLS Baseline Requirements.

Over the past six years, we have observed a pattern of compliance failures, unmet improvement commitments, and the absence of tangible, measurable progress in response to publicly disclosed incident reports. When these factors are considered in aggregate and weighed against the inherent risk each publicly-trusted CA poses to the Internet ecosystem, it is our opinion that Chrome’s continued trust in Entrust is no longer justified.

When will this action happen?

Blocking action will begin on approximately November 1, 2024, affecting certificates issued at that point or later.

Blocking action will occur in versions of Chrome 127 and greater on Windows, macOS, ChromeOS, Android, and Linux. Apple policies prevent the Chrome Certificate Verifier and corresponding Chrome Root Store from being used on Chrome for iOS.

What is the user impact of this action?

By default, Chrome users in the above populations who navigate to a website serving a certificate issued by Entrust or AffirmTrust after October 31, 2024, will see a full-page interstitial similar to this one.

Certificates issued by other CAs are not impacted by this action.

How can a website operator tell if their website is affected?

Website operators can determine if they are affected by this issue by using the Chrome Certificate Viewer.

Use the Chrome Certificate Viewer

  • Navigate to a website (e.g., https://www.google.com)
  • Click the “Tune” icon
  • Click “Connection is Secure”
  • Click “Certificate is Valid” (the Chrome Certificate Viewer will open)
    • Website owner action is not required if the “Organization (O)” field listed beneath the “Issued By” heading does not contain “Entrust” or “AffirmTrust”.
    • Website owner action is required if the “Organization (O)” field listed beneath the “Issued By” heading contains “Entrust” or “AffirmTrust”.

What does an affected website operator do?

We recommend that affected website operators transition to a new publicly-trusted CA Owner as soon as reasonably possible. To avoid adverse website user impact, action must be completed before the existing certificate(s) expire if expiry is planned to take place after October 31, 2024.

While website operators could delay the impact of blocking action by choosing to collect and install a new TLS certificate issued from Entrust before Chrome’s blocking action begins on November 1, 2024, website operators will inevitably need to collect and install a new TLS certificate from one of the many other CAs included in the Chrome Root Store.

Can I test these changes before they take effect?

Yes.

A command-line flag was added beginning in Chrome 128 (available in Canary/Dev at the time of this post’s publication) that allows administrators and power users to simulate the effect of an SCTNotAfter distrust constraint as described in this blog post’s FAQ.

How to: Simulate an SCTNotAfter distrust

1. Close all open versions of Chrome

2. Start Chrome using the following command-line flag, substituting variables described below with actual values

--test-crs-constraints=$[Comma Separated List of Trust Anchor Certificate SHA256 Hashes]:sctnotafter=$[epoch_timestamp]

3. Evaluate the effects of the flag with test websites 

Example: The following command will simulate an SCTNotAfter distrust with an effective date of April 30, 2024 11:59:59 PM GMT for all of the Entrust trust anchors included in the Chrome Root Store. The expected behavior is that any website whose certificate is issued before the enforcement date/timestamp will function in Chrome, and all issued after will display an interstitial.

--test-crs-constraints=02ED0EB28C14DA45165C566791700D6451D7FB56F0B2AB1D3B8EB070E56EDFF5,43DF5774B03E7FEF5FE40D931A7BEDF1BB2E6B42738C4E6D3841103D3AA7F339,6DC47172E01CBCB0BF62580D895FE2B8AC9AD4F873801E0C10B9C837D21EB177,73C176434F1BC6D5ADF45B0E76E727287C8DE57616C1E6E6141A2B2CBC7D8E4C,DB3517D1F6732A2D5AB97C533EC70779EE3270A62FB4AC4238372460E6F01E88,0376AB1D54C5F9803CE4B2E201A0EE7EEF7B57B636E8A93C9B8D4860C96F5FA7,0A81EC5A929777F145904AF38D5D509F66B5E2C58FCDB531058B0E17F3F0B41B,70A73F7F376B60074248904534B11482D5BF0E698ECC498DF52577EBF2E93B9A,BD71FDF6DA97E4CF62D1647ADD2581B07D79ADF8397EB4ECBA9C5E8488821423:sctnotafter=1714521599

Illustrative Command (on Windows):

"C:\Users\User123\AppData\Local\Google\Chrome SxS\Application\chrome.exe" --test-crs-constraints=02ED0EB28C14DA45165C566791700D6451D7FB56F0B2AB1D3B8EB070E56EDFF5,43DF5774B03E7FEF5FE40D931A7BEDF1BB2E6B42738C4E6D3841103D3AA7F339,6DC47172E01CBCB0BF62580D895FE2B8AC9AD4F873801E0C10B9C837D21EB177,73C176434F1BC6D5ADF45B0E76E727287C8DE57616C1E6E6141A2B2CBC7D8E4C,DB3517D1F6732A2D5AB97C533EC70779EE3270A62FB4AC4238372460E6F01E88,0376AB1D54C5F9803CE4B2E201A0EE7EEF7B57B636E8A93C9B8D4860C96F5FA7,0A81EC5A929777F145904AF38D5D509F66B5E2C58FCDB531058B0E17F3F0B41B,70A73F7F376B60074248904534B11482D5BF0E698ECC498DF52577EBF2E93B9A,BD71FDF6DA97E4CF62D1647ADD2581B07D79ADF8397EB4ECBA9C5E8488821423:sctnotafter=1714521599

Illustrative Command (on macOS):

"/Applications/Google Chrome Canary.app/Contents/MacOS/Google Chrome Canary" --test-crs-constraints=02ED0EB28C14DA45165C566791700D6451D7FB56F0B2AB1D3B8EB070E56EDFF5,43DF5774B03E7FEF5FE40D931A7BEDF1BB2E6B42738C4E6D3841103D3AA7F339,6DC47172E01CBCB0BF62580D895FE2B8AC9AD4F873801E0C10B9C837D21EB177,73C176434F1BC6D5ADF45B0E76E727287C8DE57616C1E6E6141A2B2CBC7D8E4C,DB3517D1F6732A2D5AB97C533EC70779EE3270A62FB4AC4238372460E6F01E88,0376AB1D54C5F9803CE4B2E201A0EE7EEF7B57B636E8A93C9B8D4860C96F5FA7,0A81EC5A929777F145904AF38D5D509F66B5E2C58FCDB531058B0E17F3F0B41B,70A73F7F376B60074248904534B11482D5BF0E698ECC498DF52577EBF2E93B9A,BD71FDF6DA97E4CF62D1647ADD2581B07D79ADF8397EB4ECBA9C5E8488821423:sctnotafter=1714521599

Note: If copy and pasting the above commands, ensure no line-breaks are introduced.
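
To derive the sctnotafter epoch timestamp for a different cutoff date, one option is a small C program. This is a sketch that relies on the common timegm() extension (glibc/BSD) rather than strictly standard C:

#include <stdio.h>
#include <time.h>

int main(void) {
  /* April 30, 2024 11:59:59 PM GMT, as used in the example above. */
  struct tm cutoff = {
    .tm_year = 2024 - 1900, .tm_mon = 3 /* April (0-based) */,
    .tm_mday = 30, .tm_hour = 23, .tm_min = 59, .tm_sec = 59,
  };
  printf("%lld\n", (long long)timegm(&cutoff));  /* prints 1714521599 */
  return 0;
}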

Learn more about command-line flags here.

I use Entrust certificates for my internal enterprise network, do I need to do anything?

Beginning in Chrome 127, enterprises can override Chrome Root Store constraints like those described for Entrust in this blog post by installing the corresponding root CA certificate as a locally-trusted root on the platform Chrome is running on (e.g., installed in the Microsoft Certificate Store as a Trusted Root CA).

How do enterprises add a CA as locally-trusted?

Customer organizations should defer to platform provider guidance.

What about other Google products?

Other Google product team updates may be made available in the future.

Virtual Escape; Real Reward: Introducing Google’s kvmCTF


Google is committed to enhancing the security of open-source technologies, especially those that make up the foundation for many of our products, like Linux and KVM. To this end, we are excited to announce the launch of kvmCTF, a vulnerability reward program (VRP) for the Kernel-based Virtual Machine (KVM) hypervisor, first announced in October 2023.

KVM is a robust hypervisor with over 15 years of open-source development and is widely used throughout the consumer and enterprise landscape, including platforms such as Android and Google Cloud. Google is an active contributor to the project, and we designed kvmCTF as a collaborative way to help identify and remediate vulnerabilities and further harden this fundamental security boundary.

Similar to kernelCTF, kvmCTF is a vulnerability reward program designed to help identify and address vulnerabilities in the KVM hypervisor. It offers a lab environment where participants can log in and use their exploits to obtain flags. Significantly, kvmCTF focuses on zero day vulnerabilities; as a result, we will not reward exploits that use n-day vulnerabilities. Details regarding a zero day vulnerability will be shared with Google only after an upstream patch is released, ensuring that Google obtains them at the same time as the rest of the open-source community. Additionally, kvmCTF uses the Google Bare Metal Solution (BMS) environment to host its infrastructure. Finally, given how critical a hypervisor is to overall system security, kvmCTF will reward various levels of vulnerabilities up to and including code execution and VM escape.

How it works


The environment consists of a bare metal host running a single guest VM. Participants will be able to reserve time slots to access the guest VM and attempt to perform a guest-to-host attack. The goal of the attack must be to exploit a zero day vulnerability in the KVM subsystem of the host kernel. If successful, the attacker will obtain a flag that proves their accomplishment in exploiting the vulnerability. The severity of the attack will determine the reward amount, which will be based on the reward tier system explained below. All reports will be thoroughly evaluated on a case-by-case basis.

The rewards tiers are the following:

  • Full VM escape: $250,000

  • Arbitrary memory write: $100,000

  • Arbitrary memory read: $50,000

  • Relative memory write: $50,000

  • Denial of service: $20,000

  • Relative memory read: $10,000

To facilitate the relative memory write/read tiers, and in part the denial of service tier, kvmCTF offers the option of using a host with KASAN enabled. In that case, triggering a KASAN violation will allow the participant to obtain a flag as proof.

How to participate


To begin, start by reading the rules of the program. There you will find information on how to reserve a time slot, connect to the guest, and obtain the flags; the mapping of the various KASAN violations to the reward tiers; and instructions on how to report a vulnerability, send us your submission, or contact us on Discord.

Hacking for Defenders: approaches to DARPA’s AI Cyber Challenge

The US Defense Advanced Research Projects Agency (DARPA) recently kicked off a two-year AI Cyber Challenge (AIxCC), inviting top AI and cybersecurity experts to design new AI systems to help secure major open source projects on which our critical infrastructure relies. As AI continues to grow, it’s crucial to invest in AI tools for defenders, and this competition will help advance technology to do so.

Google’s OSS-Fuzz and Security Engineering teams have been excited to assist AIxCC organizers in designing their challenges and competition framework. We also playtested the competition by building a Cyber Reasoning System (CRS) tackling DARPA’s exemplar challenge. 

This blog post will share our approach to the exemplar challenge using open source technology found in Google’s OSS-Fuzz, highlighting opportunities where AI can supercharge the platform’s ability to find and patch vulnerabilities. We hope this will inspire innovative solutions from competitors.

Leveraging OSS-Fuzz

AIxCC challenges focus on finding and fixing vulnerabilities in open source projects. OSS-Fuzz, our fuzz testing platform, has been finding vulnerabilities in open source projects as a public service for years, resulting in over 11,000 vulnerabilities found and fixed across 1200+ projects. OSS-Fuzz is free, open source, and its projects and infrastructure are shaped very similarly to AIxCC challenges. Competitors can easily reuse its existing toolchains, fuzzing engines, and sanitizers on AIxCC projects. Our baseline Cyber Reasoning System (CRS) mainly leverages non-AI techniques and has some limitations. We highlight these as opportunities for competitors to explore how AI can advance the state of the art in fuzz testing.

Fuzzing the AIxCC challenges

For userspace Java and C/C++ challenges, fuzzing with engines such as libFuzzer, AFL(++), and Jazzer is straightforward because they use the same interface as OSS-Fuzz.
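
For reference, that shared interface is the fuzz target entry point OSS-Fuzz projects already implement; a minimal C harness looks like this:

#include <stddef.h>
#include <stdint.h>

/* Called repeatedly by libFuzzer or AFL++ with engine-generated inputs. */
int LLVMFuzzerTestOneInput(const uint8_t *data, size_t size) {
  if (size < 4)
    return 0;
  /* Feed data/size into the code under test here. */
  return 0;
}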

Fuzzing the kernel is trickier, so we considered two options:

  • Syzkaller, an unsupervised coverage-guided kernel fuzzer

  • A general-purpose coverage-guided fuzzer, such as AFL

Syzkaller has been effective at finding Linux kernel vulnerabilities, but is not suitable for AIxCC because Syzkaller generates sequences of syscalls to fuzz the whole Linux kernel, while AIxCC kernel challenges (exemplar) come with a userspace harness to exercise specific parts of the kernel. 

Instead, we chose to use AFL, which is typically used to fuzz userspace programs. To enable kernel fuzzing, we followed a similar approach to an older blog post from Cloudflare. We compiled the kernel with KCOV and KASAN instrumentation and ran it virtualized under QEMU. Then, a userspace harness acts as a fake AFL forkserver, which executes the inputs by performing the sequence of syscalls to be fuzzed.

After every input execution, the harness reads the KCOV coverage and stores it in AFL’s coverage counters via shared memory to enable coverage-guided fuzzing. The harness also checks the kernel dmesg log after every run to discover whether the input triggered a KASAN report.
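
As a rough sketch of what the coverage side of such a harness does, based on the kernel's documented kcov debugfs interface (error handling omitted; folding the PCs into AFL's map is elided):

#include <fcntl.h>
#include <sys/ioctl.h>
#include <sys/mman.h>

#define KCOV_INIT_TRACE _IOR('c', 1, unsigned long)
#define KCOV_ENABLE     _IO('c', 100)
#define KCOV_TRACE_PC   0
#define COVER_SIZE      (64 << 10)

unsigned long *kcov_start(int *fd_out) {
  int fd = open("/sys/kernel/debug/kcov", O_RDWR);
  ioctl(fd, KCOV_INIT_TRACE, COVER_SIZE);
  unsigned long *cover = mmap(NULL, COVER_SIZE * sizeof(unsigned long),
                              PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
  __atomic_store_n(&cover[0], 0, __ATOMIC_RELAXED);  /* reset PC count */
  ioctl(fd, KCOV_ENABLE, KCOV_TRACE_PC);
  *fd_out = fd;
  return cover;
}
/* After running one input: cover[0] holds the number of recorded PCs,
   and cover[1..cover[0]] are the covered kernel PCs, which the harness
   folds into AFL's shared-memory coverage map. */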

Some changes to Cloudflare’s harness were required in order for this to be pluggable with the provided kernel challenges. We needed to turn the harness into a library/wrapper that could be linked against arbitrary AIxCC kernel harnesses.

AIxCC challenges come with their own main(), which takes a file path. The main() function opens and reads this file, and passes it to the harness() function, which takes a buffer and size representing the input. We made our wrapper work by wrapping main() during compilation via $CC -Wl,--wrap=main harness.c harness_wrapper.a.

The wrapper starts by setting up KCOV, the AFL forkserver, and shared memory. The wrapper also reads the input from stdin (which is what AFL expects by default) and passes it to the harness() function in the challenge harness. 
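
A simplified sketch of how such a wrapper can look; beyond the documented --wrap=main convention and the harness() signature described above, the helper names here are illustrative:

#include <stddef.h>
#include <stdint.h>
#include <unistd.h>

/* Provided by the AIxCC challenge harness. */
extern int harness(const uint8_t *data, size_t size);

/* Illustrative stubs: a real wrapper sets up KCOV (see the sketch
   above) and speaks AFL's forkserver protocol over its control pipes. */
static void setup_kcov(void) { /* ... */ }
static void setup_afl_forkserver(void) { /* ... */ }

/* With -Wl,--wrap=main, the C runtime's call to main() lands here
   instead of in the challenge's own main(). */
int __wrap_main(int argc, char **argv) {
  (void)argc;
  (void)argv;
  setup_kcov();
  setup_afl_forkserver();

  /* AFL writes each input to stdin by default. */
  static uint8_t buf[1 << 20];
  ssize_t n = read(STDIN_FILENO, buf, sizeof(buf));
  if (n < 0)
    return 1;

  return harness(buf, (size_t)n);
}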

Because AIxCC’s harnesses aren’t within our control and may misbehave, we had to be careful about memory and FD leaks within the challenge harness. Indeed, the provided harness has various FD leaks, which means that fuzzing it quickly becomes useless as the FD limit is reached.

To address this, we could either:


  • Forcibly close FDs created during the run of the harness by checking for newly created FDs via /proc/self/fd before and after the execution of the harness, or

  • Just fork the userspace harness by actually forking in the forkserver. 

The first approach worked for us. The second is likely more reliable, but may worsen performance.
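
A sketch of the first approach, assuming Linux's /proc/self/fd directory (the helper name is illustrative):

#include <dirent.h>
#include <stdlib.h>
#include <unistd.h>

/* Close every FD above highest_known, i.e. anything the challenge
   harness opened and leaked during the last run. */
static void close_leaked_fds(int highest_known) {
  DIR *dir = opendir("/proc/self/fd");
  if (!dir)
    return;
  struct dirent *ent;
  while ((ent = readdir(dir)) != NULL) {
    int fd = atoi(ent->d_name);  /* "." and ".." parse to 0, below the cutoff */
    if (fd > highest_known && fd != dirfd(dir))
      close(fd);
  }
  closedir(dir);
}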

All of these efforts enabled afl-fuzz to fuzz the Linux exemplar, but the vulnerability could not easily be found even after hours of fuzzing unless we provided seed inputs close to the solution.


Improving fuzzing with AI

This limitation of fuzzing highlights a potential area for competitors to explore AI’s capabilities. The complicated input format, combined with slow execution speeds, makes the exact reproducer hard to discover. Using AI could unlock the ability for fuzzing to find this vulnerability quickly—for example, by asking an LLM to generate seed inputs (or a script to generate them) close to the expected input format based on the harness source code. Competitors might find inspiration in some interesting experiments done by Brendan Dolan-Gavitt from NYU, which show promise for this idea.


Another approach: static analysis

One alternative to fuzzing for finding vulnerabilities is static analysis. Static analysis traditionally suffers from high false-positive rates, as well as difficulty proving the exploitability and reachability of the issues it reports. LLMs could help dramatically improve bug-finding capabilities by augmenting traditional static analysis techniques with increased accuracy and analysis capabilities.


Proof of understanding (PoU)

Once fuzzing finds a reproducer, we can produce key evidence required for the PoU:

  1. The culprit commit, which can be found from git history bisection.

  2. The expected sanitizer, which can be found by running the reproducer to get the crash and parsing the resulting stacktrace.


Next step: “patching” via delta debugging

Once the culprit commit has been identified, one obvious way to “patch” the vulnerability is to simply revert the commit. However, the commit may include legitimate changes that are necessary for functionality tests to pass. To ensure functionality doesn’t break, we could apply delta debugging: we progressively include or exclude different parts of the culprit commit until the vulnerability no longer triggers while all functionality tests still pass.

This is a rather brute-force approach to “patching.” There is no comprehension of the code being patched, and it will likely not work for more complicated patches that include subtle changes required to fix the vulnerability without breaking functionality.

Improving patching with AI

These limitations highlight a second area for competitors to apply AI’s capabilities. One approach might be to use an LLM to suggest patches. A 2024 whitepaper from Google walks through one way to build an LLM-based automated patching pipeline.

Competitors will need to address the following challenges:


  • Validating the patches by running crashes and tests to ensure the crash was prevented and the functionality was not impacted

  • Narrowing prompts to include only the functions present in the crashing stack trace, to fit prompt limitations

  • Building a validation step to filter out invalid patches

Using an LLM agent is likely another promising approach, where competitors could combine an LLM’s generation capabilities with the ability to compile code and iteratively receive test failures or stacktraces.

Advancing security for everyone

Collaboration is essential to harness the power of AI as a widespread tool for defenders. As advancements emerge, we’ll integrate them into OSS-Fuzz, meaning that the outcomes from AIxCC will directly improve security for the open source ecosystem. We’re looking forward to the innovative solutions that result from this competition!

Staying Safe with Chrome Extensions

Chrome extensions can boost your browsing, empowering you to do anything from customizing the look of sites to providing personalized advice when you’re planning a vacation. But as with any software, extensions can also introduce risk.

That’s why we have a team whose only job is to focus on keeping you safe as you install and take advantage of Chrome extensions. Our team:

  • Provides you with a personalized summary of the extensions you’ve installed
  • Reviews extensions before they’re published on the Chrome Web Store
  • Continuously monitors extensions after they’re published

A summary of your extensions

The top of the extensions page (chrome://extensions) warns you of any extensions you have installed that might pose a security risk. (If you don’t see a warning panel, you probably don’t have any extensions you need to worry about.) The panel includes:

  • Extensions suspected of including malware
  • Extensions that violate Chrome Web Store policies
  • Extensions that have been unpublished by a developer, which might indicate that an extension is no longer supported
  • Extensions that aren’t from the Chrome Web Store
  • Extensions that haven’t published what they do with data they collect and other privacy practices

You’ll get notified when Chrome’s Safety Check has recommendations for you, or you can check on your own by running Safety Check. Just type “run safety check” in Chrome’s address bar and select the corresponding shortcut: “Go to Chrome safety check.”

User flow of removing extensions highlighted by Safety Check.

Besides the Safety Check, you can visit the extensions page directly in a number of ways:

  • Navigate to chrome://extensions
  • Click the puzzle icon and choose “Manage extensions”
  • Click the More menu and choose Extensions > Manage Extensions

Reviewing extensions before they’re published

Before an extension is even accessible to install from the Chrome Web Store, we have two levels of verification to ensure an extension is safe:

  1. An automated review: Each extension gets examined by our machine-learning systems to spot possible violations or suspicious behavior.
  2. A human review: Next, a team member examines the images, descriptions, and public policies of each extension. Depending on the results of both the automated and manual review, we may perform an even deeper and more thorough review of the code.

This review process weeds out the overwhelming majority of bad extensions before they even get published. In 2024, less than 1% of all installs from the Chrome Web Store were found to include malware. We're proud of this record and yet some bad extensions still get through, which is why we also monitor published extensions.

Monitoring published extensions

The same Chrome team that reviews extensions before they get published also reviews extensions that are already on the Chrome Web Store. And just like the pre-check, this monitoring includes both human and machine reviews. We also work closely with trusted security researchers outside of Google, and even pay researchers who report possible threats to Chrome users through our Developer Data Protection Rewards Program.

What about extensions that get updated over time, or are programmed to execute malicious code at a later date? Our systems monitor for that as well, by periodically reviewing what extensions are actually doing and comparing that to the stated objectives defined by each extension in the Chrome Web Store.

If the team finds that an extension poses a severe risk to Chrome users, it’s immediately removed from the Chrome Web Store and the extension gets disabled on all browsers that have it installed.

The extensions page highlights when you have a potentially unsafe extension downloaded

Other steps you can take to stay safe

Review new extensions before installing them

The Chrome Web Store provides useful information about each extension and its developer. The following information should help you decide whether it’s safe to install an extension:

  • Verified and featured badges are awarded by the Chrome team to extensions that follow our technical best practices and meet a high standard of user experience and design
  • Ratings and reviews from our users
  • Information about the developer
  • Privacy practices, including information about how an extension handles your data

Be careful of sites that try to quickly persuade you to install extensions, especially if the site has little in common with the extension.

Review extensions you’ve already installed

Even though Safety Check and your Extensions page (chrome://extensions) warn you of extensions that might pose a risk, it’s still a good idea to review your extensions from time to time.

  1. Uninstall extensions that you no longer use.
  2. Review the description of an extension in the Chrome Web Store, considering the extension’s ratings, reviews, and privacy practices — reviews can change over time.
  3. Compare an extension’s stated goals with 1) the permissions requested by an extension and 2) the privacy practices published by the extension. If requested permissions don’t align with stated goals, consider uninstalling the extension.
  4. Limit the sites an extension has permission to work on.

Enable Enhanced Protection

The Enhanced protection mode of Safe Browsing is the highest level of protection Chrome offers. Not only does this mode provide you with the best protections against phishing and malware, but it also provides additional features targeted to keep you safe against potentially harmful extensions. Threats are constantly evolving, and Safe Browsing’s Enhanced protection mode is the best way to ensure you have the most advanced security features in Chrome. You can enable it from the Safe Browsing settings page in Chrome (chrome://settings/security) by selecting “Enhanced”.

Common Expressions For Portable Policy and Beyond


I am thrilled to introduce the Common Expression Language (CEL), a simple expression language that's great for writing small snippets of logic which can be run anywhere with the same semantics, evaluating in nanoseconds to microseconds. The launch of cel.dev marks a major milestone in the growth and stability of the language.

It powers several well-known products, such as Kubernetes, where it's used to protect against costly production misconfigurations:

    object.spec.replicas <= 5

Cloud IAM uses CEL to enable fine-grained authorization:

    request.path.startsWith("/finance")

Versatility

CEL is both open source and openly governed, making it well adopted both inside and outside Google. CEL is used by a number of large tech companies, either internally or as part of their public product offering. As a reflection of this open governance, cel.dev has been launched to share more about the language and the community around it.

So, what makes CEL a good choice for these applications? Why is CEL unique or different from Amazon's Cedar Policy or Open Policy Agent's Rego? These are great questions, and common ones (no pun intended):

  • Highly optimized evaluation O(ns) - O(μs)
  • Portable with stacks supported in C++, Java, and Go
  • Thousands of conformance tests ensure consistent behavior across stacks
  • Supports extension and subsetting

Subsetting is crucial for preserving predictable compute and memory impacts, and it only exists in CEL. As any latency-critical service maintainer will tell you, it's vital to have a clear understanding of the compute and memory implications of any new feature. Imagine you've chosen an expression language and validated that its functionality meets your security, serving, and scaling goals, but after launch an update to the library introduces new functionality which can't be disabled and leaves your product vulnerable to attack. Your alternatives are to fork the library and accept the maintenance costs, to introduce custom validation logic which is likely to be insufficient to prevent abuse, or to redesign your service. CEL's support for subsetting allows you to ensure that what was true at the initial product launch will remain true until you decide when to expose more of its functionality to customers.

Cedar Policy language was developed by Amazon. It is open source, implemented in Rust, and offers formal verification. Formal verification allows Cedar to validate policy correctness at authoring time. CEL is not just a policy language, but a broader expression language. It is used within larger policy projects in a way that allows users to augment their existing systems rather than adopt an entirely new one.

Formal verification often has challenges scaling to large amounts of data and is based on the belief that a formal proof of behavior is sufficient evidence of compliance. However, regulations and policies in natural language are often ambiguous. This means that logical translations of these policies are inherently ambiguous as well. CEL's approach to this challenge of compliance and reasoning about potentially ambiguous behaviors is to support evaluation over partial data at a very high speed and with minimal resources.

CEL is fast enough to be used in networking policies and expressive enough to be used in detailed application policies. Having a greater coverage of use cases improves your ability to reason about behavior holistically. But, what if you don't have all the data yet to know exactly how the policy will behave? CEL supports partial evaluation using four-valued logic which makes it possible to determine cases which are definitely allowed, denied, or where policy behavior is conditional on additional inputs. This allows for what-if analysis against historical data as well as against speculative data from new use cases or proposed policy changes.

Open Policy Agent's Rego is also open source; it is implemented in Go and based on Datalog, which makes it possible to offer proofs similar to Cedar's. However, the language is much more powerful than Cedar, and more powerful than CEL. This expressiveness means that OPA Rego is fantastic for non-latency-critical, single-tenant solutions, but difficult to safely embed in existing offerings.


Four-valued Logic

CEL uses commutative logical operators that can render a true, false, error, or unknown status. This is a scalable alternative to formal verification and the expressiveness of Datalog. Four-valued logic allows CEL to evaluate over a partial set of inputs to deliver either a definitive result or communicate that more information is necessary to complete the evaluation.

What is four-valued logic?

True, false, and error outcomes are considered definitive: no additional information will change the outcome of the expression. Consider the following case:

    1/0 != 0 && false

In traditional programming languages, this expression would be an error; however, in CEL the outcome is false.

Now consider the following case where an input variable, called unknown_var is marked as unknown:

    unknown_var && true

The outcome of this expression is UNKNOWN{unknown_var} indicating that once the variable is known, the evaluation can be safely completed. An unknown indicates what was missing, and alerts the user to fix the outcome with more information. This technique both borrows from and extends SQL three-valued predicate logic which uses TRUE, FALSE, and NULL with commutative logical operators. From a CEL perspective, the error state is akin to SQL NULL that arises when there is an absence of information.


CEL compatibility with SQL

CEL leverages SQL semantics to ensure that it can be seamlessly translated to SQL. SQL optimizers perform significantly better over large data sets, making it possible to evaluate over data at rest. Imagine trying to scale a single expression evaluation over tens of millions of records. Even if the expression evaluates within a single microsecond, the evaluation would still take tens of seconds. The more complex the expression, the greater the latency. SQL excels at this use case, so translation from CEL to SQL was an important factor in the design in order to unlock the possibility of performant policy checks both online with CEL and offline with SQL.
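
As an illustration, the earlier IAM-style check:

    request.path.startsWith("/finance")

could translate to a SQL predicate along the lines of the following (a hypothetical rendering; the exact SQL depends on dialect and schema):

    STARTS_WITH(request_path, '/finance')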


Thank you CEL Community!

We’re proud to announce cel.dev as a major milestone in the maturity and adoption of the language, and we look forward to working with you to make CEL the best building block for writing latency-critical, portable logic. Feel free to contact us at [email protected]

By Tristan Swadell – Senior Staff Software Engineer

Time to challenge yourself in the 2024 Google CTF


It’s Google CTF time! Install your tools, commit your scripts, and clear your schedule. The competition kicks off on June 21, 2024 at 6:00 PM UTC and runs through June 23, 2024 at 6:00 PM UTC. Registration is now open at goo.gle/ctf.


Join the Google CTF (at goo.gle/ctf), a thrilling arena to showcase your technical prowess. The Google CTF consists of a set of computer security puzzles (or challenges) involving reverse-engineering, memory corruption, cryptography, web technologies, and more. Participants can use obscure security knowledge to find exploits through bugs and creative misuse, and with each completed challenge your team will earn points and move up through the ranks. 

The top 8 teams of the Google CTF will qualify for our Hackceler8 competition taking place in Málaga, Spain later this year as a part of our larger Escal8 event. Hackceler8 is our experimental esport-style hacking game competition, custom-made to mix CTF and speedrunning. 


Screenshot from last year’s Hackceler8 game

In the competition, teams need to find clever ways to abuse the game features to capture flags as quickly as possible.

Last year, teams assumed the role of Bartholomew (Mew for short), the fuzzy and adorable protagonist of Hackceler8 2023, set to defeat and overcome the evil rA.Ibbit taking over Silicon Valley! What adventures will Mew encounter this year? See the 2023 grand final to get a sense of the story and gameplay. The prize pool for this year’s Google CTF and Hackceler8 stands at more than $32,000.

Itching to get started early? Want to learn more, or get a leg up on the competition? Review challenges from previous years, including previous Hackceler8 matches, all open-sourced here. Or gain inspiration by binge-watching hours of Hackceler8 2023 videos!

If you are just starting out in this space, check out our documentary H4CK1NG GOOGLE; it’s a great way to get acquainted with security. We also recommend checking out this year’s Beginner’s Quest, launching later this summer, which will teach you some of the tools and tricks with simpler gamified challenges. For example, last year we explored hacking through time – you can use this to prepare for what’s yet to come.

Whether you’re a seasoned CTF player or just curious about cybersecurity and ethical hacking, we want to invite you to join us. Sign up for the Google CTF to expand your skill set, meet new friends in the security community, and even watch the pros in action. For the latest announcements, see goo.gle/ctf, subscribe to our mailing list, or follow us on Twitter @GoogleVRP. Interested in bug hunting for Google? Check out bughunters.google.com. See you there!


Enabling safe AI experiences on Google Play

Posted by Prabhat Sharma – Director, Trust and Safety, Play, Android, and Chrome

The rapid advancements in generative AI unlock opportunities for developers to create new immersive and engaging app experiences for users everywhere. In this time of fast-paced change, we are excited to continue enabling developers to create innovative, high-quality apps while maintaining the safe and trusted experience people expect from Google Play. Our goal is to make AI helpful for everyone, enriching the app ecosystem and enhancing user experiences.

Ensuring safety for apps with generative AI features

Over the past year, we’ve expanded our review capabilities to address new complexities that come with apps with generative AI features. We’re using new technology like large language models (LLMs) to quickly analyze app submissions, including vast amounts of text, to identify potential issues like sexual content or hate speech and flag them for people on our global team to take a closer look. This combination of human expertise and increased AI efficiency helps us improve the app review experience for developers and create a safer app environment for everyone.

Additionally, we have strengthened Play’s existing policies to address emerging concerns and feedback from users and developers, and keep pace with evolving technologies like generative AI. For example, last October, we shared that all generative AI apps must give users a way to report or flag offensive content without having to leave the app.

Building apps with generative AI features in a responsible way

Google Play's policies, which have long supported the foundation of our user safety efforts, are deeply rooted in a continuous collaboration between Play and developers. They provide a framework for responsible app development, and help ensure that Play remains a trusted platform around the world. As generative AI is still in its early stages, we have received feedback from developers seeking clarity on the requirements for apps on Play that feature AI-created content. Today we are responding to that feedback and providing guidance to help developers enhance the quality and safety of AI-powered apps, avoid potential issues or delays in app submissions, foster trust among users, and contribute to a thriving and responsible app ecosystem on Google Play:

    • Review Google Play policies: Google Play’s policies help us provide a safe and high-quality experience, so we don’t allow apps with generative AI features that can be inappropriate or harmful to users. Make sure you review our AI-Generated Content Policy and ensure that your apps meet these requirements to avoid being rejected or removed from Google Play.

      In particular, apps that generate content using AI must:

        • Give users a way to report or flag offensive content. Monitoring and prioritizing user feedback is especially important for apps with generative AI features, where user interactions directly shape the content and experience.
      Moving image of AI Art Generator app UI experience on an Android mobile device
      Note: Images are examples and subject to change

    • Promote your app responsibly: Advertising your app is an important tool in growing your business, and it's critical to do it in a way that's safe and respectful of users. Ultimately you’re responsible for how your app is marketed and advertised, so review your marketing materials to ensure that your ads accurately represent your app's capabilities, and that all ads and promotional content associated with your app, across all platforms, meet our App Promotion requirements. For example, advertising your app for an inappropriate use case may result in it being removed from Google Play.

    • Rigorously test AI tools and models: You are accountable for the experience in your apps, so it’s critical for you to understand the underlying AI tools and models used to create media and to ensure that these tools are reliable and that the outputs are aligned with Google Play's policies and respect user safety and privacy. Be sure to test your apps across various user scenarios and safeguard them against prompts that could manipulate your generative AI feature to create harmful or offensive content. For example, you can use our closed testing feature to share early versions of your app and ask for specific feedback on if your users get generated results that they expect.

      This thorough understanding and testing especially applies to generative AI, so we recommend that you start documenting this testing because we may ask to review it in the future to help us better understand how you keep your users protected.

As the AI landscape evolves, we will continue to update our policies and developer tools to address emerging needs and complexities. This includes introducing new app onboarding capabilities in the future to make the process of submitting a generative AI app to Play even more transparent and streamlined. We’ll also share best practices and resources, like our People + AI Guidebook, to support developers in building innovative and responsible apps that enrich the lives of users worldwide.

As always, we're your partners in keeping users safe and are open to your feedback so we can build policies that help you lean into AI to scale your business on Play in ways that delight and protect our shared users.

On Fire Drills and Phishing Tests

In the late 19th and early 20th century, a series of catastrophic fires in short succession led an outraged public to demand action from the budding fire protection industry. Among the experts, one initial focus was on “Fire Evacuation Tests”. The earliest of these tests focused on individual performance and tested occupants on their evacuation speed, sometimes performing the tests “by surprise” as though the fire drill were a real fire. These early tests were more likely to result in injuries to the test-takers than in any improvement in survivability. It wasn’t until better protective engineering was introduced - wider doors, push bars at exits, firebreaks in construction, lighted exit signs, and so on - that survival rates from building fires began to improve. As protections evolved over the years and improvements like mandatory fire sprinklers became required in building code, survival rates have continued to improve steadily, and “tests” have evolved into announced, advanced training and posted evacuation plans.

In this blog, we will analyze the modern practice of Phishing “Tests” as a cybersecurity control as it relates to industry-standard fire protection practices.


Modern “Phishing tests” strongly resemble the early “Fire tests”


Google currently operates under regulations (for example, FedRAMP in the USA) that require us to perform annual “Phishing Tests.” In these mandatory tests, the Security team creates and sends phishing emails to Googlers, counts how many interact with the email, and educates them on how to “not be fooled” by phishing. These exercises typically collect reporting metrics on sent emails and how many employees “failed” by clicking the decoy link. Usually, further education is required for employees who fail the exercise. Per the FedRAMP pen-testing guidance doc: “Users are the last line of defense and should be tested.”


These tests resemble the first “evacuation tests” that building occupants were once subjected to. They require individuals to recognize the danger, react individually in an ‘appropriate’ way, and are told that any failure is an individual failure on their part rather than a systemic issue. Worse, FedRAMP guidance requires companies to bypass or eliminate all systematic controls during the tests to ensure the likelihood of a person clicking on a phishing link is artificially maximized.


Among the harmful side effects of these tests:


  • There is no evidence that the tests result in fewer incidents of successful phishing campaigns;

    • Phishing (or more generically social engineering) remains a top vector for attackers establishing footholds at companies.

    • Research shows that these tests do not effectively prevent people from being fooled. This study with 14,000 participants showed a counterproductive effect of phishing tests, showing that “repeat clickers” will consistently fail tests despite recent interventions.

  • Some (e.g., FedRAMP) phishing tests require bypassing existing anti-phishing defenses. This creates an inaccurate perception of actual risks, allows penetration testing teams to avoid having to mimic actual modern attacker tactics, and creates a risk that the allowlists put in place to facilitate the test could be accidentally left in place and reused by attackers.

  • These tests significantly increase the load on Detection and Incident Response (D&R) teams, as users saturate them with thousands of needless reports.

  • Employees are upset by them and feel security is “tricking them”, which degrades the trust between our users and security teams that is necessary for making meaningful systemic improvements and for ensuring employees take timely action on actual security events.

  • At larger enterprises with multiple independent products, people can end up with numerous overlapping required phishing tests, causing repeated burdens.


But are users the last line of defense?

Training humans to avoid phishing or social engineering with a 100% success rate is a likely impossible task. There is value in teaching people how to spot phishing and social engineering so they can alert security to perform incident response. By ensuring that even a single user reports attacks in progress, companies can activate full-scope responses which are a worthwhile defensive control that can quickly mitigate even advanced attacks. But, much like the Fire Safety professional world has moved to regular pre-announced evacuation training instead of surprise drills, the information security industry should move toward training that de-emphasizes surprises and tricks and instead prioritizes accurate training of what we want staff to do the moment they spot a phishing email - with a particular focus on recognizing and reporting the phishing threat.

In short - we need to stop doing phishing tests and start doing phishing fire drills.


A “phishing fire drill” would aim to accomplish the following:

  • Educate our users about how to spot phishing emails

  • Inform the users on how to report phishing emails

  • Allow employees to practice reporting a phishing email in the manner that we would prefer, and

  • Collect useful metrics for auditors, such as:

    • The number of users who completed the practice exercise of reporting the email as a phishing email

    • The time between the email opening and the first report of phishing

    • Time of first escalation to the security team (and time delta)

    • Number of reports at 1 hour, 4 hours, 8 hours, and 24 hours post-delivery

Example 

When performing a phishing drill, someone would send an email that announces itself as a phishing email and contains relevant instructions or specific tasks to perform. An example text is provided below.

Hello!  I am a Phishing Email. 


This is a drill - this is only a drill!


If I were an actual phishing email, I might ask you to log into a malicious site with your actual username or password, or I might ask you to run a suspicious command like <example command>. I might try any number of tricks to get access to your Google Account or workstation.


You can learn more about recognizing phishing emails at <LINK TO RESOURCE> and even test yourself to see how good you are at spotting them. Regardless of the form a phishing email takes, you can quickly report them to the security team when you notice they’re not what they seem.


To complete the annual phishing drill, please report me. To do that, <company-specific instructions on where to report phishing>.


Thanks for doing your part to keep <company> safe!


- Tricky Phish, Ph.D

You can’t “fix” people, but you can fix the tools.

Phishing and Social Engineering aren’t going away as attack techniques. As long as humans are fallible and social creatures, attackers will have ways to manipulate the human factor. The more effective approach to both risks is a long-term pursuit of secure-by-default systems, with investment in engineering defenses such as unphishable credentials (like passkeys) and multi-party approval for sensitive security contexts throughout production systems. It’s because of investments in architectural defenses like these that Google hasn’t had to seriously worry about password phishing in nearly a decade.

Educating employees about alerting security teams of attacks in progress remains a valuable and essential addition to a holistic security posture. However, there’s no need to make this adversarial, and we don’t gain anything by “catching” people “failing” at the task. Let's stop engaging in the same old failed protections and follow the lead of more mature industries, such as Fire Protection, which has faced these problems before and already settled on a balanced approach.