
Introducing Widget Quality Tiers

Posted by Ivy Knight – Senior Design Advocate

Level up your app Widgets with new quality tiers

Widgets can be a powerful tool for engaging users and increasing your app's visibility. They can also improve the user experience by giving users a more convenient way to access your app's content and features.

A great Android widget is helpful, adaptive, and visually cohesive with the overall aesthetic of the device home screen.

To help you build a great widget, we are pleased to introduce Android Widget Quality Tiers!

The new Widget quality tiers guide you toward a best-practice widget implementation that looks great and brings your users value across the ecosystem of Android phones, tablets, and foldables.

What does this mean for widget makers?

Whether you are planning a new widget, or investing in an update to an existing widget, the Widget Quality Tiers will help you evaluate and plan for a high quality widget.

Just as the Large Screen quality tiers help you optimize app experiences, the Widget quality tiers guide you in creating widgets for all Android devices that are not just functional, but also visually appealing and user-friendly.

Two screenshots of a phone display different views in the Google Play app. The first shows a list of running apps with the Widget filter applied in a search for 'Running apps'; the second shows the Nike Run Club app page.
Widgets that meet quality tier guidelines will be discoverable under the new Widget filter in Google Play.

Consider using our Canonical Widget layouts, which are based on Jetpack Glance components, to make it easier for you to design and build a Tier 1 widget your users will love.
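The canonical layouts are built on Jetpack Glance. As a hedged sketch of the general shape of a Glance widget (the widget name and content below are hypothetical, not one of the canonical layouts), a minimal implementation pairs a GlanceAppWidget with a receiver:

```kotlin
import android.content.Context
import androidx.glance.GlanceId
import androidx.glance.GlanceTheme
import androidx.glance.appwidget.GlanceAppWidget
import androidx.glance.appwidget.GlanceAppWidgetReceiver
import androidx.glance.appwidget.provideContent
import androidx.glance.text.Text

// Hypothetical minimal widget. Wrapping content in GlanceTheme picks up
// Material 3 dynamic color on supported devices, which is part of the
// Tier 1 theming guidance described above.
class PlantsWidget : GlanceAppWidget() {
    override suspend fun provideGlance(context: Context, id: GlanceId) {
        provideContent {
            GlanceTheme {
                Text(text = "Water plants")
            }
        }
    }
}

// The receiver is the entry point the system invokes; it must also be
// declared in AndroidManifest.xml with appwidget-provider metadata.
class PlantsWidgetReceiver : GlanceAppWidgetReceiver() {
    override val glanceAppWidget: GlanceAppWidget = PlantsWidget()
}
```

The canonical layouts add sizing, scaffolding, and interaction handling on top of this basic shape.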

Let’s take a look at the Widget Quality Tiers

There are three tiers built with required system defaults and suggested guidance to create an enhanced widget experience:

Tier 1: Differentiated

Four mockups show examples of Material Design 3 dynamic color applied to an app called 'Radio Hour'.
Differentiated widgets go further by implementing theming and adapting to resizing.

Tier 1 widgets are exemplary widgets offering hero experiences that are personalized and create unique, productive home screens. These widgets meet the Tier 2 standards plus additional criteria for layout, color, discovery, and system coherence.

A stylized cartoon figure holds their chin thoughtfully while a chat bubble icon is highlighted
For example, use the system provided corner radius, and don’t set a custom corner radius on Widgets.

Add more personalization with dynamic color and generated previews while ensuring your widgets look good across devices by not overriding system defaults.

 Four mockups show examples of Material Design 3 components on Android: a contact card, a podcast player, a task list, and a news feed.
Tier 1 widgets that, from the top left, properly crop content, fill the layout bounds, have appropriately sized headers and touch targets, and make good use of colors and contrast.

Tier 2: Quality Standard

These widgets are helpful, usable, and provide a quality experience. They meet all criteria for layout, color, discovery, and content.

A simple to-do list app widget displays two tasks: 'Water plants' and 'Water more plants.' Both tasks have calendar icons next to them. The app is titled 'Plants' and has search and add buttons in the top right corner.
Make sure your widget has appropriate touch targets.

Tier 2 widgets are functional but simple: they meet the basic criteria for a usable widget. But if you want to create a truly stellar experience for your users, the Tier 1 criteria introduce ways to make a more personal, interactive, and coherent widget.

Tier 3: Low Quality

These widgets don't meet the minimum quality bar and don't provide a great user experience: they fail to follow, or are missing, criteria from Tier 2.

 Examples of Material Design 3 widgets are displayed on a light pink background with stylized X shapes. Widgets include a podcast player, a contact card, to-do lists, and a music player.
Clockwise from the top left: not filling the bounds, poorly cropped content, low color contrast, a mis-sized header, and small touch targets.

A stylized cartoon person with orange hair, a blue shirt, holds a pencil to their cheek.  'Kacie' is written above them, with a cut off chat bubble icon.
For example, ensure content is visible and not cropped.

Build and elevate your Android widgets with Widget Quality Tiers

Dive deeper into the widget quality tiers and start building widgets that not only look great but also provide an amazing user experience! Check out the official Android documentation for detailed information and best practices.


This blog post is part of our series: Spotlight Week on Widgets, where we provide resources—blog posts, videos, sample code, and more—all designed to help you design and create widgets. You can read more in the overview of Spotlight Week: Widgets, which will be updated throughout the week.

My review is stuck – what now?

Troubleshooting your code reviews

Google engineers contribute thousands of open source pull requests every month! With that experience, we've learned that a stuck review isn't the end of the world. Here are a few techniques for moving a change along when the frustration starts to wear on you.

You've been working on a pull request for weeks - it implements a feature you're excited about, and sure, you've hit a few roadbumps along the review path, but you've been dealing with them and making changes to your code, and surely all this hard work will pay off. But now, you're not sure what to do: your pull request hasn't moved forward in some time, you dread working on another revision just to be pushed back on, and you're not sure your reviewers are even on your side anymore. What happened? Your feature is so cool - how are you going to land it?

Reviews can stall for a number of reasons:

  • There may be multiple solutions to the problem you're trying to solve, but none of them are ideal - or, all of them are ideal, but it’s hard to tell which one is most ideal. Or, your reviewers don't agree on which solution is right.
  • You may not be getting any response from reviewers whatsoever. Either your previously active reviewers have disappeared, or you have had trouble attracting any in the first place.
  • The goalposts have moved; you thought you were just going to implement a feature, but then your reviewer pointed out that your tests are poor style, and in fact, the tests in that whole suite are poor style, and you need to refactor them all before moving forward. Or, you implemented it with one approach, and your reviewer asked for a different one; after a few rounds with that different approach, another reviewer suggests you try… the approach you started with in the first place!
  • The reviewer is asking you to make a change that you think is a bad idea, and you aren't convinced by their arguments. Or maybe it's the second reviewer who isn't convinced, and you aren't sure which side to take in their disagreement - and which approach to implement in your PR.
  • You think the reviewer's suggestion is a good one, but when you sit down to write the code, it's toilsome, or ugly, or downright impossible to make work.
  • The reviewer is pointing out what's wrong, but not explaining what they want instead; the reviews you are receiving aren't actionable, clear, or consistent.

These things happen all the time, especially in large projects where review consensus is a bit muddy. But you can't change the project overnight - and anyway, you just want your code to land! What can you do to get past these obstacles?


Step Back and Summarize the Discussion

As the author of your change, it's likely that you're thinking about this change all the time. You spent hours working on it; you spent hours looking through reviewer comments; you were personally impacted by the lack of this change in the codebase, because you ran into a bug or needed some functionality that didn't exist. It's easy to assume that your reviewers have the same context that you do - but they probably don't!

You can get a lot of mileage out of reminding your reviewers what the goal of your change is in the first place. It can be really helpful to take a step back and summarize the change and discussion so far for your reviewers. For the most impact, make sure you're covering these key points:


What is the goal of your pull request?

Avoid being too specific, as you can limit your options: the goal isn't to add a button that does X, the goal is to make users who need X feel less pain. If your goal is to solve a problem for your day job, make sure you explain that, too - don't just say "my company needs X," say "my company's employees use this project in this way with these scaling or resource constraints, and we need X in order to do this thing."

There will also be some goals inherent to the project. Most of the time, there is an unstated goal to keep the codebase maintainable and clean. You may need to ensure your new feature is accessible to users with disabilities. Maybe your new feature needs to perform well on low-end devices, or compile on a platform that you don't use.

Not all of these constraints will apply to every change - but some of them will be highly relevant to your work. Make sure you restate the constraints that need to be considered for this pull request.


What constraints have your reviewers stated?

During the life of the pull request until now, you've probably received feedback from at least one reviewer. What does it seem like the reviewer cares about? Beyond individual pieces of feedback, what are their core concerns? Is your reviewer asking whether your feature is at the appropriate level because they're in the middle of a long campaign to organize the project's architecture? Are they nitpicking your error messages because they care about translatability?

Try to summarize the root of the concerns your reviewers are discussing. By summarizing them in one place, it becomes pretty clear when there's a conflict; you can lay out constraints which mutually exclude each other here, and ask for help determining whether any may be worth discarding - or whether there's a way to meet both constraints that you hadn't considered yet.


What alternative approaches have you considered?

Before you wrote the code in the first place, you may have considered a couple different ways of approaching the problem. You probably discarded the paths not taken for a reason, but your reviewers may have never seen that reasoning. You may also have received feedback suggesting you reimplement your change with a different approach; you may or may not think that approach is better.

Summarize all the possible approaches that you considered, and some of their pros and cons. Even better, since you've already summarized the goals and constraints that have been uncovered so far during your review, you can weigh each approach against these criteria. Perhaps it turns out the approach you discarded early on is actually the most performant one, and performance is a higher priority than you had realized at first. The community may even chime in with an approach you hadn't considered at all, but which is much more architecturally acceptable to the project.


List assumptions and remaining open questions

Through the process of writing this recap, you may come to some conclusions. State those explicitly - don't rely on your reviewers to come to the same conclusions you did. Now you have all the reasoning out in the open to defend your point.

You may also be left with some open questions - like that two reviewers are in disagreement without a clear solution, or that you aren't sure how to meet a given constraint. List those questions, too, and ask for your co-contributors' help answering them.

Here's an example of a time the author did this on the Git project - it's not in a pull request, but the same techniques are used. Sometimes these summaries are so comprehensive that it's worth checking them into the project to remind future contributors how we arrived at certain decisions.


Pushing Back

Contributors - especially ones who are new to the project, but even long-time members - often feel like the reviewer has all the power and authority. Surely the wise veteran reviewing my code is infallible; whatever she says must be true and the best approach. But it's not so! Your reviewers are human, and they want to work together with you to come to a good solution.

Pushing back isn't always the answer. For example, if you disagree with the project's established style, or with a direction that's already been discussed and agreed upon (like avoiding adding new dependencies unless completely necessary), trying to push back against the will of the entire project is likely fruitless - especially if you're trying to get an exception for your PR. Or if the root of your disagreement is "it works on my machine, so who cares," you probably need to think a little more broadly before arguing with your reviewer who says the code isn't portable to another platform or maintainable in the long term. Reviewers and project maintainers have to live with your code for a long time; you may be only sending one patch before moving on to another project. So in matters of maintainability, it's often best to defer to the reviewer.

However, there are times when it could be the right move:

  • When your reviewer's top goal is avoiding code churn, but your top goal is a tidy end state.
  • When you try out the approach your reviewer recommended, but it is difficult for a reason they may not have anticipated.
  • When the reviewer's feedback truly doesn't have any improvements over your existing approach, and it's purely a matter of taste. (Use caution here - push back by asking for more justification, in case you missed something!)

Sometimes, after hearing pushback, your reviewer may reply pointing out something that you hadn't considered which converts you over to their way of thinking. This isn't a loss - you understand their goals better, and the quality of your PR goes up as a result. But sometimes your reviewer comes over to your side instead, and agrees that their feedback isn't actually necessary.


Lean on Reviewers for Help

Code reviews can feel a little bit like homework. You send your draft in; you get feedback on it; you make a bunch of changes on your own; rinse and repeat. But they don't have to be this way - it's much better to think of reviews as a team effort, where you and the reviewers are all pitching in to produce a high-quality code change. You're not being graded!

If implementing changes based on the feedback you received is difficult - like it seems that you'll need to do a large-scale refactor to make the function interface change you were asked for - or you're running into challenges that are hard to figure out on your own - like that one test failure doesn't make any sense - you can ask for more help! Just keep in mind the usual rules of asking for help online:

  • Use a lot of detail to describe where you're having trouble - cite the exact test that's failing or error case you're seeing.
  • Explain what you've tried so far, and what you think the issue is.
  • Include your in-progress code so that your reviewers can see how far you've gotten.
  • Be polite; you're asking for help, not demanding the author change their feedback or implement it for you.

Remember that if you need to, you can also ask your reviewer to switch to a different medium. It might be easier to debug a failing test over half an hour of instant messaging instead of over a week of GitHub comments; it might be easier to reason through a difficult API change with a video conference and a whiteboard. Your reviewer may be busy, but they're still a human, and they're invested in the success of your patch, too. (And if you don't think they're invested - did you try summarizing the issue, as described above? Make sure they understand where you're coming from!) Don't be afraid to ask for a bit more bandwidth if you think it'd be valuable.


Get a Second Opinion

When you're really, truly stuck - nobody agrees and your pushback isn't working and you met with your reviewer and they're stumped too - remember that most of the time, you and your reviewer aren't the only people working on the project. It's often possible to ask for a second opinion.

It may sound transactional, but it's okay to offer review-for-review: "Hi Alice, I know you're quite busy, but I see that you have a PR out now; I can make time to review it, but it'd help me out a lot if you can review mine, too. Bob and I are stuck because…." This can be a win for both you and Alice!

If you're stuck and can't come to a decision, you can also escalate. Escalation sounds scary, but it is a healthy part of a well-run project. You can talk to the project or subsystem maintainer, share a link to the conversation, and mention that you need help coming to a decision. Or, if your disagreement is with the maintainer, you can raise it with another trusted and experienced contributor the same way.

Sometimes you may also receive comments that feel particularly harsh or insulting, especially when they're coming from a contributor you've never spoken with face-to-face. In cases like this, it can be helpful to even just get someone else to read that comment and tell you if they find it rude, too. You can ask just about anybody - your friend, your teammate at your day job - but it's best if you ask someone who has some prior context with the project or with the person who made the comment. Or, if you'd rather dig, you can examine this same reviewer's comments on other contributions - you may find that there's a language barrier, or that they're incredibly active and are pressed for time to reply to your review, or that they're just an equal-opportunity jerk (or not-so-equal-opportunity), and your review isn't the exception.

If you can’t find another person to help, you can read through past closed and merged PRs to see the comments – do any of them apply to your PR, and perhaps the maintainer just hasn’t had time to give you the same feedback? Remember, this might be your first PR in the project but the maintainer might think “oh I’ve given this feedback a thousand times”.


When to Give Up

All these techniques help put more agency back in your hands as the author of a pull request, but they're not a sure thing. You might find that no matter how many of these techniques you use, you're still stuck - or that none of these techniques seem applicable. How do you know when to quit? And when you do want to, how can you do it without burning any bridges?


What did I want out of this PR, anyway?

There are many reasons to contribute to open source, and not all of us have the same motivations. Make sure you know why you sent this change. Were you personally or professionally affected by the bug or lack of feature? Did you want to learn what it's like to contribute to open source software? Do you love the community and just feel a need to give back to it?

No matter what your goal is, there's usually a way to achieve it without landing your pull request. If you need the change, it's always an option to fork - more on that in a moment. If you're wanting to try out contribution for the first time, but this project just doesn't seem to be working for it, you can learn from the experience and try out a different project instead - most projects love new contributors! If you're trying to help the community, but they don't want this change, then continuing to push it through isn't actually all that helpful.

Like when you re-summarized the review, or when you pushed back on your reviewer's opinions, think deeply about your own goals - and think whether this PR is the best way to meet them. Above all, if you're finding that you dread coming back to the project, and nothing is making that feeling go away, there's absolutely nothing wrong with moving on instead.


Taking a break

It's not necessary for you to definitively decide to stop working on your PR. You can take a break from it! Although you'll need to rebase your work when you come back to it, your idea won't go stale from a few weeks or months in timeout, and if any strong emotions (yours or the reviewers') were playing a role, they'll have calmed down by then. You may even come back to an email from your long-lost disappeared reviewer, saying they've been out on parental leave for the last two months and they're happy to get back to your change now. And once the idea has been seeded in the project, it's not unusual for community members to say weeks or months later, talking about a different problem, "Yeah, this would be a lot easier if we picked up something like <your abandoned pr>, actually. Maybe we should revive that effort."

These breaks can be helpful for everyone - but it's best practice to make sure the project knows you're stepping away. Leave a comment saying that you'll be quiet for a while; this way, a reviewer who may not know you feel stuck isn't left hanging waiting for you to resume your work.


Forking

For smaller projects, you often have the option of forking - a huge benefit of using open source software! You can decide whether or not to share your forked change with the world (in accordance with the license); "forking" can even mean that you just compile the project from source and use it with your own modifications on your local machine.

And if you were working on this PR for your day job, you can maintain a soft fork of the project with those changes applied. Google even has a tool for doing this efficiently with monorepos called Copybara.

Bonus: if you use your change yourself (or give it to your users) for a period of time, and come back to the PR after a bit of a break, you now have information about your code's proven stability in the wild; this can strengthen the case for your change when you resume work on it.


Giving up responsibly

If you do decide you're done with the effort, it's helpful to the project for you to say so explicitly. That way, if someone wants to take over your code change and finish pushing it through, they don't need to worry about stepping on your toes in the process. If you're comfortable doing so, it can be helpful to say why:

"After 15 iterations, I've decided to instead invest my spare time in something that is moving more quickly. If anybody wants to pick up this series and run with it, you have my blessing."
"It seems like Alice and I have an intractable disagreement. Since we're not able to resolve it, I'm going to let this change drop for now."
"My obligations at home are ramping up for the summer and I won't have time to continue working on this until at least September. If anybody wants to take over, feel free; I'll be available to provide review, but please don't block submission on my behalf, as I'll be quite busy."

Avoid the temptation to flounce, though. The above samples explain your circumstances and reasoning in a way that's easy to accept; it's less productive to say something like, "It's obvious that this project has no respect for its developers. Further work on this effort is a waste of my time!" Think about how you would react to reading such an email. You'd probably dismiss it as someone who's easily frustrated, maybe feel a little guilty, but ultimately, it's unlikely that you'd make any changes.


You Are the Main Driver of Your PR

In the end, it's important to remember that, ultimately, the author of a pull request is usually the one most invested in the success of that pull request. When you run into roadblocks, remember these techniques to overcome them, and know that you have power to help your code keep moving forward.

By Emily Shaffer – Staff Software Engineer

Emily Shaffer is a Staff Software Engineer at Google. They are the tech lead of Google's development efforts on the first-party Git client, and help advise Google on participation in other open source projects in the version control space, including jj, JGit, and Copybara. They formerly worked as a subsystem maintainer within the OpenBMC project. You can recognize Emily on most social media under the handle nasamuffin.

Android Support for Kotlin Multiplatform to Share Business Logic Across Mobile, Web, Server, and Desktop Platforms

Posted by Maru Ahues Bouza – Director, Product Management, and Jeffrey van Gogh – Director, Engineering

Traditionally, developers must either write code individually for each platform they want to target, or make a number of compromises in order to reuse code across platforms. Android has been actively supporting Kotlin since 2017, and today we are excited to announce we are supporting Kotlin Multiplatform on Android, which enables sharing code across mobile, web, server, and desktop platforms. This helps increase productivity for developers, and fits great with Android's Kotlin-first approach, resulting in higher quality Android apps. Our focus is to support sharing business logic (the parts that are most agnostic to the user interfaces) because we've seen Android developers get the most value in not having to maintain duplicate copies of this code.

Kotlin Multiplatform (KMP) has been a long-standing investment for the team behind Google Workspace, allowing for flexibility and speed in delivering valuable cross-platform experiences. The Google Workspace team is enthusiastic about KMP's potential as the direction for its multi-platform architecture investment, confident in its ability to meet performance expectations for various workloads.

The initial step in this journey is the rollout of the Google Docs app for Android, iOS, and Web, which leverages KMP for shared business logic, validating its readiness for production use at Google scale. The Google Workspace team is thrilled to continue exploring the possibilities of KMP across its product suite, aiming to enhance productivity and deliver seamless experiences to users on all platforms.

We see a lot of companies successfully leveraging Kotlin Multiplatform for cross-platform development of their apps; learn how they apply different code-sharing strategies here.

Kotlin Multiplatform (KMP), developed by JetBrains, provides a novel approach to sharing code across platforms by compiling Kotlin to platform-native binaries. Kotlin brings the full, modern, memory-managed language to native platforms, enabling native interoperability and incremental adoption. Kotlin on Android, combined with Kotlin Multiplatform on other platforms, provides a great way to increase productivity and quality without compromising on performance or interoperability.
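Sharing "business logic" concretely means code with no platform dependencies. As an illustrative sketch (the types and function names here are invented for the example), logic like this can live in a commonMain source set and be consumed unchanged on Android, iOS, web, and server:

```kotlin
// Hypothetical cross-platform business logic: pure Kotlin with no
// platform APIs, so it compiles for JVM, Native, and Wasm targets alike.
data class CartItem(val name: String, val priceCents: Long, val quantity: Int)

fun cartTotalCents(items: List<CartItem>): Long =
    items.sumOf { it.priceCents * it.quantity }

// Manual formatting rather than String.format, which is JVM-only
// and therefore unavailable in common code.
fun formatCents(totalCents: Long): String =
    "$" + (totalCents / 100) + "." + (totalCents % 100).toString().padStart(2, '0')
```

Each platform's UI layer then renders the formatted result with its native toolkit, which is exactly the split between shared logic and platform-specific interfaces described above.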

Architecture overview for Kotlin Multiplatform (KMP)

Current Status of Support

Many widely-used libraries offer built-in support for Kotlin Multiplatform, streamlining your cross-platform development experience. These libraries work seamlessly together. For example, Ktor simplifies networking tasks by handling REST service consumption, while kotlinx.serialization converts data to formats like JSON, and Okio manages essential file I/O. Additionally, SKIE facilitates the use of modern types and coroutines on iOS, and CocoaPods integration enables the use of iOS-specific dependencies.

We've worked with JetBrains and the Kotlin developer community to add Kotlin Multiplatform support to a number of Jetpack libraries and in some cases provide the iOS platform targets, while in others, JetBrains and the community provide the multiplatform distributions.

Today, the Annotations, Collections, and DataStore libraries all have support for Kotlin Multiplatform in stable versions. We are also adding support to validate binary compatibility for the iOS platform targets, bringing them on a par with the quality standards for Android. In addition to the libraries above, we've also begun working on Kotlin Multiplatform support for Room, Lifecycle, and ViewModels with alpha versions now available. To better understand which classes and functions are available where, the library reference documentation now indicates "common" and platform support.

Indication of Common, Native and Android support in documentation

Android engineers have collaborated with JetBrains on the Kotlin compiler to improve runtime performance in Kotlin/Native (for iOS and native desktop operating systems), showing 18% runtime performance improvements in compiler benchmarks. In addition, the Android team contributed build-time performance improvements to the Kotlin/Native compiler of up to 2x.

The Android Gradle Plugin now has official support for Kotlin Multiplatform, enabling a concise build definition for setting up Android as a platform target for shared code as shown below:

plugins {
    id("org.jetbrains.kotlin.multiplatform")
    id("com.android.library")
}

kotlin {
    androidTarget {
        compilations.all {
            kotlinOptions {
                jvmTarget = "11"
            }
        }
    }  
    listOf(
        iosX64(),
        iosArm64(),
        iosSimulatorArm64()
    ).forEach { iosTarget ->
        iosTarget.binaries.framework {
            baseName = "Shared"
            isStatic = true
        }
    }    
    sourceSets {
        commonMain.dependencies {
            // put your Multiplatform dependencies here
        }
    }
}
KMP Support in the Android Gradle Plugin DSL

As Android Studio is based on the IntelliJ Platform from JetBrains, it inherits support for Kotlin Multiplatform code editing and many other development features. Other Android development tools like Android Lint and Kotlin Symbol Processing (KSP) are also beginning to add more Kotlin Multiplatform support as well.

Google Chrome now has official support for WasmGC which is used by Kotlin Multiplatform's WebAssembly platform target to enable code sharing with the browser in an efficient and performant way.

Latest details on these projects are available on the updated Android Kotlin Multiplatform page.

Future Areas of Work

We've heard from many Android developers and Google engineering teams that they want expanded support for Kotlin Multiplatform so they can more easily share code with other platforms. Android plans to continue collaborating with JetBrains, Google engineering teams, and the community on a variety of projects, including:

    • Expanding and stabilizing Jetpack libraries with Kotlin Multiplatform support
    • Wasm platform target support in Jetpack libraries
    • Kotlin/Native build performance
    • Kotlin/Native debugging
    • Expanding Kotlin Multiplatform support in Android Studio

Learn More and Try It Out

Sharing code with Kotlin Multiplatform between Android and other platforms enables higher developer productivity and quality so we hope you will give it a try! You can use the Kotlin Multiplatform wizard to create a new KMP project. Learn more in the documentation.

Alternatively, explore one of these sample projects showcasing how to use some of the Jetpack libraries with Kotlin Multiplatform:

If there are additional areas you would like Android to work on, let us know, and be a part of our vibrant Android Developer community on LinkedIn, Medium, YouTube, and X.

Achieving privacy compliance with your CI/CD: A guide for compliance teams

Posted by Fergus Hurley – Co-Founder & GM, Checks, and Evan Otero – Product Manager, Checks

In the fast-paced world of software development, Continuous Integration and Continuous Deployment (CI/CD) have become cornerstones, enabling teams to deliver high-quality software faster than ever. However, rapid innovation, the increasing use of third-party libraries, and AI-generated code have accelerated the introduction of vulnerabilities and risks. Addressing these issues early in the development lifecycle is therefore essential so that teams can launch their products quickly and confidently.

The introduction of the Checks privacy compliance CI/CD tooling represents a significant stride towards addressing these concerns by reducing manual intervention and automating privacy and compliance checks as part of the release cycle.

In this post, we explore the meaning of CI/CD for compliance team members unfamiliar with this technology and how Checks can weave privacy and compliance protection practices into that pipeline.


What is CI/CD?

Continuous Integration (CI) and Continuous Deployment (CD) are foundational practices in modern software development. They enable development teams to increase efficiency, improve quality, and accelerate delivery.

Continuous Integration (CI) automatically integrates code changes from multiple contributors into a software project. This practice enables teams to detect problems early by running automated tests on each change before it is merged into the main branch.

Graphic showing CI/CD continuous cycle

Continuous Deployment (CD) takes automation further by automatically deploying all code changes to a testing or production environment after the build stage. This means that, in addition to automated testing, automated release processes ensure that new changes are accessible to users as quickly as possible.


Shifting issue-spotting left with CI/CD pipelines

The automated workflows that implement CI/CD are commonly called "pipelines." CI/CD pipelines automate the steps software changes go through, from development to deployment. These steps include compiling code, running tests (unit tests, integration tests, etc.), security scans, and more. If all automated checks pass, the changes go live in a given environment, such as testing or production, without human intervention.
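As a concrete illustration, a minimal pipeline of this shape could be expressed as a GitHub Actions workflow. This is only a sketch: the job and step names are hypothetical, and the compliance step is a placeholder script standing in for whichever scanner your team uses.

```yaml
# Hypothetical CI pipeline: build, test, then scan, on every change.
name: ci
on: [push, pull_request]

jobs:
  build-test-scan:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Build
        run: ./gradlew assembleRelease
      - name: Unit tests
        run: ./gradlew test
      - name: Compliance scan   # placeholder for your compliance tooling
        run: ./scripts/run-compliance-scan.sh
```

Because the scan runs on every push and pull request, compliance issues surface at the same point in the workflow as failing unit tests, which is exactly the "shift left" effect described below.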

These pipelines are designed to catch issues as early as possible, embodying the practice known as “shifting left.” The benefits of “shifting left”, particularly when applied through CI/CD pipelines, include:

  • Improved quality and security: Automated testing in CI/CD pipelines ensures that code is rigorously tested for functional and compliance issues before it reaches production. This early detection enables teams to address vulnerabilities and errors when they are generally easier and less costly to fix.
  • Faster release cycles: By catching and addressing issues early, teams avoid the bottlenecks associated with late-stage discovery of problems. This efficiency reduces the time from development to deployment, enabling faster release cycles and more responsive delivery of features and fixes.
  • Reduced costs: Detecting issues later in the development process can be significantly more expensive to resolve, especially if they're found after deployment. Early detection through CI/CD pipelines minimizes these costs by preventing complex rollbacks and the need for emergency fixes in production environments.
  • Increased reliability and trust: Software that undergoes thorough testing before release is generally more reliable and secure. This reliability builds trust among users and stakeholders, crucial for maintaining a positive reputation and ensuring user satisfaction.

Checks brings privacy and compliance tests to your CI/CD

The Checks CI/CD tooling seamlessly integrates app compliance scanning into CI/CD pipelines via plugins for GitHub, Jenkins, and FastLane. You can also use Checks in any other CI/CD system that supports custom scripts, such as GitLab, TeamCity, Bitbucket, and more.

image showing logos of CI/CD systems that support custom scripts - FastLane, Jenkins, GitHub, Atlassian BitBucket, GitLab, Azure DevOps, and Team City

When Checks scans an app, the binary undergoes dynamic and static analysis to understand your data collection and sharing practices, including app dependencies such as SDKs, permissions, and endpoints. This data is then tested against global regulatory requirements, store policies, your custom Checks policies, and your privacy policy to find potential issues and opportunities for improvement.


Top 5 benefits of integrating Checks into your CI/CD

image showing checks report highlighting potential issues

By adding Checks as a step in your CI/CD pipeline, you can automate app and code compliance scanning as part of the development lifecycle.

The top 5 benefits of integrating Checks in your CI/CD are:

  1. Real-time, intelligent alerting: You can stay informed of new compliance issues or changes in data behavior across your product portfolio with instant notifications via email or Slack. 
  2. Understand data sharing & SDKs: Checks helps ensure secure third-party data sharing by giving you visibility into SDK integrations, permissions, and data flows. By using Checks, you can be confident in your third-party dependencies before your public release. 
  3. Ensure new builds follow your company policies: Checks enables you to automate data governance with custom policies that let you set up safeguards against specific endpoints, SDKs, data types, and permissions, tailoring privacy to your specific needs. These policies help ensure all new releases comply with your company’s data policies. 
  4. Keep your Google Play Data safety section up-to-date: Checks can recommend Google Play Data safety section disclosures and alert you if you should make an update before releasing publicly, ensuring your declarations are always up-to-date. 
  5. Deploy quickly and with confidence: When Checks finds issues in the CI/CD, these vulnerabilities are caught and remedied early, significantly reducing the risk of compliance violations once you deploy the app. Checks helps you maintain high compliance standards without slowing down the release cycle, enabling teams to deploy with confidence and ensuring that user data is protected from the outset.
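The custom policies described in point 3 amount to declarative rules evaluated against each build's observed behavior. A minimal sketch of one such rule in Kotlin (the function and parameter names are hypothetical, not the actual Checks API):

```kotlin
// Hypothetical policy rule: flag any network endpoint observed in the
// build that is not on the team's approved allowlist.
fun findPolicyViolations(
    observedEndpoints: Set<String>,
    allowedEndpoints: Set<String>,
): Set<String> = observedEndpoints - allowedEndpoints

fun main() {
    val violations = findPolicyViolations(
        observedEndpoints = setOf("api.example.com", "tracker.thirdparty.io"),
        allowedEndpoints = setOf("api.example.com"),
    )
    // Any remaining endpoints are candidates to block the release.
    println(violations)
}
```

In a real pipeline, rules like this run automatically on every build, so a newly added SDK that phones home to an unapproved endpoint fails the release before it ships.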

Next steps

Getting started is simple: first sign up for Checks, then add Checks to your CI/CD pipelines with these simple configuration steps. Once configured, Checks is ready to perform a variety of privacy and compliance verifications.

This proactive approach to privacy and compliance safeguards against potential risks and aligns with regulatory compliance requirements, making it an invaluable asset for any compliance and development team.

Battling Impersonation Scams: Monzo’s Innovative Approach

Posted by Todd Burner – Developer Relations Engineer

Cybercriminals continue to invest in advanced financial fraud scams, costing consumers more than $1 trillion in losses. According to the 2023 Global State of Scams Report by the Global Anti-Scam Alliance, 78 percent of mobile users surveyed experienced at least one scam in the last year. Of those surveyed, 45 percent said they experienced more scams in the last 12 months.


The Global Scam Report also found that phone calls are the top method to initiate a scam. Scammers frequently employ social engineering tactics to deceive mobile users.

The key place scammers want individuals to take action is in the tools that give access to their money. This means financial services are frequently targeted. As cybercriminals push forward with more scams and their reach extends globally, it's important to innovate in the response.

One such innovator is Monzo, who have been able to tackle scam calls through a unique impersonation detection feature in their app.

Monzo’s Innovative Approach

Founded in 2015, Monzo is the largest digital bank in the UK with presence in the US as well. Their mission is to make money work for everyone with an ambition to become the one app customers turn to to manage their entire financial lives.

Monzo logo

Impersonation fraud is an issue the entire industry is grappling with, and Monzo decided to take action and introduce an industry-first tool. An impersonation scam is a common social engineering tactic in which a criminal pretends to be someone else so they can convince you to send them money. These scams often involve urgent pretenses that present a risk to a user's finances or an opportunity for quick wealth. Under this pressure, fraudsters convince users to disable security safeguards and ignore proactive warnings about potential malware, scams, and phishing.

Call Status Feature

Android offers multiple layers of spam and phishing protection for users including call ID and spam protection in the Phone by Google app. Monzo’s team wanted to enhance that protection by leveraging their in-house telephone systems. By integrating with their mobile application infrastructure they could help their customers confirm in real time when they’re actually talking to a member of Monzo’s customer support team in a privacy preserving way.

If someone calls a Monzo customer stating they are from the bank, their users can go into the app to verify this. In the Monzo app’s Privacy & Security section, users can see the ‘Monzo Call Status’, letting them know if there is an active call ongoing with an actual Monzo team member.

“We’ve built this industry-first feature using our world-class tech to provide an additional layer of comfort and security. Our hope is that this could stop instances of impersonation scams for Monzo customers from happening in the first place and impacting customers.” 

- Priyesh Patel, Senior Staff Engineer, Monzo’s Security team

Keeping Customers Informed

If a user is not talking to a member of Monzo's customer support team, they will see that, along with some helpful guidance. If the 'Monzo Call Status' shows that the caller is not Monzo, the feature tells the user to hang up right away and report it to Monzo's team. Customers can start a scam report directly from the call status feature in the app.

screen grab of Monzo call status alerting the customer that the call the customer is receiving is not coming from Monzo. The customer is being advised to end the call

If a genuine call is ongoing, the customer will see confirmation.

screen grab of Monzo call status confirming to the customer that the call the customer is receiving is coming from Monzo.

How does it work?

Monzo has integrated a few systems together to help inform their customers. A cross-functional team was put together to build the solution.

Monzo’s in-house technology stack meant that the systems that power their app and customer service phone calls can easily communicate with one another. This allowed them to link the two and share details of customer service calls with their app, accurately and in real-time.

The team then worked to identify edge cases, like when the user is offline. In this situation, Monzo recommends that customers don't speak to anyone claiming to be from Monzo until they're connected to the internet again and can check the call status within the app.

screen grab of Monzo call status displaying warning while the customer is offline letting the customer know the app is unable to verify whether or not the call is coming from Monzo, so it is safer not to answer.

Results and Next Steps

The feature has proven highly effective in safeguarding customers, and has received praise from industry experts and consumer champions.

“Since we launched Call Status, we receive an average of around 700 reports of suspected fraud from our customers through the feature per month. Now that it’s live and helping protect customers, we’re always looking for ways to improve Call Status - like making it more visible and easier to find if you’re on a call and you want to quickly check that who you’re speaking to is who they say they are.” 

- Priyesh Patel, Senior Staff Engineer, Monzo’s Security team

Final Advice

Monzo continues to invest and innovate in fraud prevention. The call status feature brings together both technological innovation and customer education to achieve its success, and gives their customers a way to catch scammers in action.

A layered security approach is a great way to protect users. Android and Google Play provide layers like app sandboxing, Google Play Protect, and privacy preserving permissions, and Monzo has built an additional one in a privacy-preserving way.

To learn more about Android and Play’s protections and to further protect your app check out these resources:

Enhanced screen sharing capabilities in Android 14 (and Google Meet) improve meeting productivity

Posted by Francesco Romano – Developer Relations Engineer on Android

App screen sharing improves privacy and productivity

Android 14 QPR2 brings exciting advancements in user privacy and streamlined multitasking with app screen sharing. No longer do users have to broadcast their entire screen while screen sharing or casting, ensuring they share exactly what they want to share.

Leverage the new MediaProjection APIs to customize the screen sharing experience and deliver even greater utility to your users.

What is app screen sharing?

Prior to Android 14, users could only share or record their entire screen on Android devices, which could expose private information in other apps or notifications.

App screen sharing is a new platform feature that lets users restrict sharing and recording to a single app window, mitigating the risk of oversharing private messages or notifications. With app screen sharing, the status bar, navigation bar, notifications, and other system UI elements are excluded from the shared display. Only the content of the selected app is shared.

This not only enhances security for screen sharing, but also enables new use cases on large screens. Users can improve multitasking productivity – such as screen sharing while attending a meeting – by taking advantage of extra screen space on these larger devices.

How does it work?

There are three different entry points for users to start app screen sharing:

    1. Start casting from Quick Settings
    2. Start screen recording from Quick Settings
    3. Launch from an app with screen sharing or recording capabilities via the MediaProjection API

Let’s consider an example where a host user wants to share a single app to the participants of a video call.

The host user starts screen sharing as usual, but now in Android 14 they are presented with an updated dialog that allows them to choose whether to share a single app instead of their entire screen.

The host user decides to share a single app, and they select the app from the App Selector.

During screen sharing, the video call participants can see only the content from the selected app.

The host user can end the screen capture in a few ways: from the app where sharing started, in the notification shade, by closing the app being shared, or by ending the video call.

visual journey of host sharing a single app to the participants in a video call across four panels

How to support app screen sharing?

Apps that use the MediaProjection APIs can start app screen sharing without any code changes. However, it's important to test your app to ensure the screen sharing experience works as intended, since the user flow changes with this new behavior. Previously, the user stayed in the host app after the permission dialog. With app screen sharing, the user is not returned to the host app; instead, the target app to be shared is launched. If the target app was already running in the foreground (e.g., in multi-window mode), it simply becomes the top focused app.

Android 14 also introduces two callback methods to empower you to customize the sharing experience:

MediaProjection.Callback#onCapturedContentResize(width, height) is invoked immediately after capture begins or when the size of the captured region changes. The method arguments provide the accurate sizing for the streamed capture.

Note: The given width and height correspond to the same width and height that would be returned from android.view.WindowMetrics#getBounds() of the captured region.

If the recorded content has a different aspect ratio from either the VirtualDisplay or output Surface, the captured stream has black bars around the recorded content. The application can avoid the black bars around the recorded content by updating the size of both the VirtualDisplay and output Surface:

override fun onCapturedContentResize(width: Int, height: Int) {
    // VirtualDisplay instance from MediaProjection#createVirtualDisplay().
    virtualDisplay.resize(width, height, dpi)

    // Create a new Surface with the updated size.
    val textureName: Int = 0 // replace with your OpenGL texture object name
    val surfaceTexture = SurfaceTexture(textureName)
    surfaceTexture.setDefaultBufferSize(width, height)
    val surface = Surface(surfaceTexture)

    // Ensure the VirtualDisplay has the updated Surface to send the capture to.
    virtualDisplay.setSurface(surface)
}
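To see why a mismatched aspect ratio produces those bars, here is a small pure-Kotlin sketch of the underlying geometry (the function name is hypothetical, not an Android API): the content is scaled to the largest rectangle that fits the output surface, and the leftover area of the surface is what renders as black bars.

```kotlin
// Scale the captured content to the largest size that fits the output
// surface while preserving aspect ratio (integer math, rounding down).
// The unused remainder of the surface shows up as black bars.
fun fittedContentSize(
    contentW: Int, contentH: Int,
    surfaceW: Int, surfaceH: Int,
): Pair<Int, Int> =
    if (surfaceW.toLong() * contentH <= surfaceH.toLong() * contentW) {
        // Width-limited: bars appear above and below.
        Pair(surfaceW, (contentH.toLong() * surfaceW / contentW).toInt())
    } else {
        // Height-limited: bars appear left and right.
        Pair((contentW.toLong() * surfaceH / contentH).toInt(), surfaceH)
    }

fun main() {
    // A 16:9 capture into a 1000x1000 surface leaves ~438 px of bars.
    println(fittedContentSize(1920, 1080, 1000, 1000)) // (1000, 562)
}
```

Resizing both the VirtualDisplay and the Surface to the reported width and height, as in the callback above, makes the fitted size equal the surface size, so no bars remain.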

The other API is MediaProjection.Callback#onCapturedContentVisibilityChanged(isVisible), which is invoked after capture begins or when the visibility of the captured region changes. The method argument indicates the current visibility of the captured region.

The callback is triggered when:

    • The captured region becomes invisible (isVisible == false). This may happen when the projected app is no longer topmost, for example when another app entirely covers it, or when the user navigates away from the captured app.
    • The captured region becomes visible again (isVisible == true). This may happen if the user moves the covering app to reveal at least some portion of the captured app (for example, when multiple apps are visible in multi-window mode).

Applications can take advantage of this callback by showing or hiding the captured content from the output Surface based on whether the captured region is currently visible to the user. You should pause or resume the sharing accordingly in order to conserve resources.
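A minimal sketch of that pause/resume pattern, assuming a hypothetical `Encoder` abstraction over your capture pipeline (this is the shape of the logic, not a specific Android API):

```kotlin
// Illustrative only: gate expensive work on the visibility signal from
// MediaProjection.Callback#onCapturedContentVisibilityChanged.
interface Encoder {
    fun pause()
    fun resume()
}

class VisibilityGate(private val encoder: Encoder) {
    fun onCapturedContentVisibilityChanged(isVisible: Boolean) {
        // Stop producing frames while the captured app is hidden,
        // and pick up again as soon as it becomes visible.
        if (isVisible) encoder.resume() else encoder.pause()
    }
}
```

In a real app this logic would live inside your MediaProjection.Callback subclass, next to the resize handling shown earlier.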

How Google Meet is improving meeting productivity

“App screen sharing enables users to share specific information in a Meet call without oversharing private information on the screen like messages and notifications. Users can choose specific apps to share, or they can share the whole screen as before. Additionally, users can leverage split-screen mode on large screen devices to share content while still seeing the faces of friends, families, coworkers, and other meeting participants.” - Product Manager at Google Meet

Let’s see app screen sharing in action during a video call, in this coming-soon version of Google Meet!

moving image of app screen sharing in action during a video call on Google Meet

Window on the world

App screen sharing opens doors (and windows) for more focused and secure app experiences within the Android ecosystem.

This new feature enhances several use cases:

    • Collaboration apps can facilitate focused discussion on specific design elements, documents, or spreadsheets without including distracting background details.
    • Tech support agents can remotely view the user's problem app without seeing potentially sensitive content in other areas.
    • Video conferencing tools can share a presentation window selectively rather than the entire screen.
    • Educational apps can demonstrate functionality without compromising student privacy, and students can share projects without fear of showing sensitive information.

By thoughtfully implementing app screen sharing, you can establish your app as a champion of user privacy and convenience.

Introducing the Fused Orientation Provider API: Consistent device orientation for all

Posted by Geoffrey Boullanger – Senior Software Engineer, Shandor Dektor – Sensors Algorithms Engineer, Martin Frassl and Benjamin Joseph – Technical Leads and Managers

Device orientation, or attitude, is used as an input signal for many use cases: virtual or augmented reality, gesture detection, or compass and navigation – any time the app needs the orientation of a device in relation to its surroundings. We’ve heard from developers that orientation is challenging to get right, with frequent user complaints when orientation is incorrect. A maps app should show the correct direction to walk towards when a user is navigating to an exciting restaurant in a foreign city!

The Fused Orientation Provider (FOP) is a new API in Google Play services that provides quality and consistent device orientation by fusing signals from accelerometer, gyroscope and magnetometer.

Although the Android Rotation Vector already provides device orientation (and will continue to do so), the new FOP provides more consistent behavior and higher performance across devices. We designed the FOP API to be similar to the Rotation Vector to make the transition as easy as possible for developers.

In particular, the Fused Orientation Provider:

    • Provides a unified implementation across devices: an API in Google Play services means that there is no implementation variance across different manufacturers. Algorithm updates can be rolled out quickly and independent of Android platform updates;
    • Directly incorporates local magnetic declination, if available;
    • Compensates for lower quality sensors and OEM implementations (e.g., gyro bias, sensor timing).

In certain cases, the FOP returns values piped through from the AOSP Rotation Vector, adapted to incorporate magnetic declination.

How to use the FOP API

Device orientation updates can be requested by creating and sending a DeviceOrientationRequest object, which defines some specifics of the request like the update period.

The FOP then outputs a stream of the device’s orientation estimates as quaternions. The orientation is referenced to geographic north. In cases where the local magnetic declination is not known (e.g., location is not available), the orientation will be relative to magnetic north.

In addition, the FOP provides the device's heading and accuracy, which are derived from the orientation estimate. This is the same heading that is shown in Google Maps, which uses the FOP as well. We recently added changes to better cope with magnetic disturbances, improving the reliability of the heading accuracy cone for Google Maps and other FOP clients.
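For intuition about the relationship between the quaternion output and the heading, the yaw angle (rotation about the vertical axis) can be extracted with the standard formula. The sketch below is purely illustrative; in practice you would use the `headingDegrees` the API already exposes rather than computing this yourself.

```kotlin
import kotlin.math.atan2

// Illustrative only: yaw (rotation about the vertical axis) of a unit
// quaternion (w, x, y, z), in degrees, using the ZYX Euler convention.
fun yawDegrees(w: Double, x: Double, y: Double, z: Double): Double =
    Math.toDegrees(atan2(2.0 * (w * z + x * y), 1.0 - 2.0 * (y * y + z * z)))

fun main() {
    println(yawDegrees(1.0, 0.0, 0.0, 0.0)) // 0.0: identity rotation
    // A 90-degree rotation about the vertical axis:
    val s = Math.sqrt(0.5)
    println(yawDegrees(s, 0.0, 0.0, s))     // ~90.0
}
```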

The update rate can be set by requesting a specific update period. The FOP does not guarantee a minimum or maximum update rate. For example, updates can arrive faster than requested if another app has a faster parallel request, or slower than requested if the device doesn't support the requested rate.

For full specification of the API, please consult the API documentation:

Example usage (Kotlin)

package ...

import android.content.Context
import com.google.android.gms.location.DeviceOrientation
import com.google.android.gms.location.DeviceOrientationListener
import com.google.android.gms.location.DeviceOrientationRequest
import com.google.android.gms.location.FusedOrientationProviderClient
import com.google.android.gms.location.LocationServices
import com.google.common.flogger.FluentLogger
import java.util.concurrent.Executors

class Example(context: Context) {
  private val logger: FluentLogger = FluentLogger.forEnclosingClass()

  // Get the FOP API client
  private val fusedOrientationProviderClient: FusedOrientationProviderClient =
    LocationServices.getFusedOrientationProviderClient(context)

  // Create an FOP listener
  private val listener: DeviceOrientationListener =
    DeviceOrientationListener { orientation: DeviceOrientation ->
      // Use the orientation object returned by the FOP, e.g.
      logger.atFinest().log("Device Orientation: %s deg", orientation.headingDegrees)
    }

  fun start() {
    // Create an FOP request
    val request =
      DeviceOrientationRequest.Builder(DeviceOrientationRequest.OUTPUT_PERIOD_DEFAULT).build()

    // Create (or re-use) an Executor or Looper, e.g.
    val executor = Executors.newSingleThreadExecutor()

    // Register the request and listener
    fusedOrientationProviderClient
      .requestOrientationUpdates(request, executor, listener)
      .addOnSuccessListener { logger.atInfo().log("FOP: Registration Success") }
      .addOnFailureListener { e: Exception? ->
        logger.atSevere().withCause(e).log("FOP: Registration Failure")
      }
  }

  fun stop() {
    // Unregister the listener
    fusedOrientationProviderClient.removeOrientationUpdates(listener)
  }
}

Technical background

The Android ecosystem has a wide variety of system implementations for sensors. Devices should meet the criteria in the Android compatibility definition document (CDD) and must have an accelerometer, gyroscope, and magnetometer available to use the fused orientation provider. It is preferable that the device vendor implements the high fidelity sensor portion of the CDD.

Even though Android devices adhere to the Android CDD, recommended sensor specifications are not tight enough to fully prevent orientation inaccuracies. Examples of this include magnetometer interference from internal sources, and delayed, inaccurate or nonuniform sensor sampling. Furthermore, the environment around the device usually includes materials that distort the geomagnetic field, and user behavior can vary widely. To deal with this, the FOP performs a number of tasks in order to provide a robust and accurate orientation:

    • Synchronize sensors running on different clocks and delays;
    • Compensate for the hard iron offset (magnetometer bias);
    • Fuse accelerometer, gyroscope, and magnetometer measurements to determine the orientation of the device in the world;
    • Compensate for gyro drift (gyro bias) while moving;
    • Produce a realistic estimate of the compass heading accuracy.
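The hard-iron step in the list above is conceptually simple: once a constant magnetometer bias has been estimated, it is subtracted from every raw sample. A sketch with hypothetical names and made-up values (the FOP does all of this internally; you never need to write this):

```kotlin
// Illustrative only: hard-iron compensation removes a constant offset
// (e.g. from magnetized components near the sensor) from raw readings.
fun compensateHardIron(raw: DoubleArray, bias: DoubleArray): DoubleArray =
    DoubleArray(3) { i -> raw[i] - bias[i] }

fun main() {
    val raw = doubleArrayOf(32.0, -5.0, 41.0)  // microtesla, made-up values
    val bias = doubleArrayOf(12.0, 3.0, -1.0)  // estimated hard-iron offset
    println(compensateHardIron(raw, bias).toList()) // [20.0, -8.0, 42.0]
}
```

The harder part, which the FOP handles for you, is estimating the bias vector in the first place and keeping it current as the magnetic environment changes.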

We have validated our algorithms on comprehensive test data to provide a high quality result on a wide variety of devices.

Availability and limitations

The Fused Orientation Provider is available on all devices running Google Play services on Android 5 (Lollipop) and above. Developers need to add the dependency play-services-location:21.2.0 (or above) to access the new API.

Permissions

No permissions are required to use the FOP API. The output rate is limited to 200 Hz on devices running API level 31 (Android S) or higher, unless the android.permission.HIGH_SAMPLING_RATE_SENSORS permission is declared in your AndroidManifest.xml.

Power consideration

Always request the longest update period (lowest frequency) that is sufficient for your use case. While more frequent FOP updates may be required for high-precision tasks (for example, augmented reality), they come with a power cost. If you do not know which update period to use, we recommend starting with DeviceOrientationRequest.OUTPUT_PERIOD_DEFAULT, as it fits most client needs.

Foreground behavior

FOP updates are only available to apps running in the foreground.


Copyright 2023 Google LLC.
SPDX-License-Identifier: Apache-2.0

Designing your account deletion experience with users in mind

Posted by Tatiana van Maaren – Global T&S Partnerships Lead, Privacy & Security, May Smith - Product Manager, and Anita Issagholyan – Policy Lead

With millions of developers relying on our platform, Google Play is committed to keeping our ecosystem safe for everyone. That’s why, in addition to our ongoing investments in app privacy and security, we also continuously update our policies to respond to new challenges and user expectations.

For example, we recently introduced a new account deletion policy with required disclosures within the Data Safety section on the Play Store. Deleting an account should be as easy as creating one, so the new policy requires developers to provide information and web resources that help users to manage their data and understand an app's deletion practices.

To help you build trust and design a user-friendly experience that helps meet our policy requirements, consider these 5 best practices when implementing your account deletion solution.

1.     Make it seamless

Users prefer a simple and straightforward account deletion flow. Although users know that more steps may follow (such as authentication), navigating multiple screens before reaching the deletion page can be a significant barrier and create negative feelings for the user. Consider providing your account deletion option on an account settings page, or place a prominent button on the home screen. Design the flow with discoverability in mind by taking the user directly to the deletion process.

2.     Allow automatic deletion

Users feel that if they can create an account without talking to a customer service agent, they should be able to delete their account online, too. If automation is not on your roadmap just yet, consider a step-by-step deletion request form or a dedicated page to connect users with customer support.

3.     Offer guidance and explain potential implications

Users delete their accounts for various reasons, some of which may be better resolved another way. Early in your deletion flow, point your users toward a Help Center article that explains how your deletion process works in simple terms, including any potential consequences. For example, make it clear if your users will need to pause their payment method before deleting the account, or download any account data they want to keep. Helping your users understand the process in advance can prevent accidental deletions. For those who do change their minds, consider offering a way to recover their accounts within a reasonable timeframe.

Here’s an example of how Canva, a Play Store developer, has designed its in-app deletion flow to explain the potential consequences of account deletion:

user journey on the Canva app in three panels
User journey on the Canva app
“User data privacy has always been important for us. We’ve always been intentional about our approach in optimizing the Canva app so our users can have more transparency and control over their data. We’re welcoming these new requirements from the Play store as we know this new flow will elevate users’ trust in our app and benefit our business in the long term.” - Will Currie, Software Engineer, Canva

4.     Confirm account deletion

Sometimes users misunderstand whether the account itself or just data collected by the app was deleted in the deletion process. Users often think that the data your app stored in the cloud will automatically be deleted at the same time as account deletion. Since it may take time to remove account data from internal company systems or comply with data retention requirements in different regions, transparency about the process can help you maintain trust in your brand and make it more likely for users to return in the future.

Here’s how SYBO Games has designed their web-based account deletion flow:

user journey on the Sybo Games web resource in four panels
User journey on the SYBO Games web resource
“We are always striving to ensure that our games provide a fun user experience, built on a solid data protection foundation. When we learned about the new account deletion update on Google Play, we thought this was a great step forward to ensure that the entire developer ecosystem optimizes for user safety. We encourage developers to carve out time to embrace these improvements and prioritize regular privacy check-ins.”  - Elizabeth Ann Schweitzer, Games Compliance Manager, SYBO Games

5.     Don’t forget user engagement

This is a great opportunity to connect with your users at a critical moment. Make sure users who have uninstalled your app can easily remove their accounts through a web resource without needing to reinstall the app. You can also invite them to complete a survey or provide feedback on their decision.

Protecting users' data is essential for building trust and loyalty. By updating the Data Safety section on Google Play and continuing to optimize user experience for account deletion, you can strengthen trust in your company while striving for the highest level of user data protection.


Thank you for your continued collaboration and feedback in developing this data transparency feature and in helping make Google Play safe for all.

Building Open Models Responsibly in the Gemini Era

Google has long believed that open technology is not only good for our company, but good for the industry, consumers, and the world. We’ve released open-source projects like Android and Chromium that transformed access to mobile and web technologies, and have done the same in AI with Transformers, TensorFlow, and AlphaFold. The release of our Gemma family of open models is a next step in how we’re deepening our commitment to open technology alongside an industry-leading safe, responsible approach. At the same time, the rapidly evolving nature of AI raises important considerations for how to enable safety-aligned open models: an approach that supports broad innovation while promoting safe uses.

A benefit of open source is that once it is released, its license gives users full creative autonomy. This is a powerful guarantee of technology access for developers and end users. Another benefit is that open-source technology can be modified to fit the unique use case of the end user, without restriction.

In the hands of a malicious actor, however, the lack of restrictions can raise risks. Computing has been through similar cycles before, addressing issues such as protecting users of the open internet, handling cryptography, and addressing open-source software security. We now face this challenge with AI. Below we share the approach we took to openly releasing Gemma models, and the advancements in open model safety we hope to accelerate.


Providing access to Gemma open models

Today, Gemma models are being released as what the industry collectively has begun to refer to as “open models.” Open models feature free access to the model weights, but terms of use, redistribution, and variant ownership vary according to a model’s specific terms of use, which may not be based on an open-source license. The Gemma models’ terms of use make them freely available for individual developers, researchers, and commercial users for access and redistribution. Users are also free to create and publish model variants. In using Gemma models, developers agree to avoid harmful uses, reflecting our commitment to developing AI responsibly while increasing access to this technology.

We’re precise about the language we’re using to describe Gemma models because we’re proud to enable responsible AI access and innovation, and we’re equally proud supporters of open source. The definition of "Open Source" has been invaluable to computing and innovation because of requirements for redistribution and derived works, and against discrimination. These requirements enable cross-industry collaboration, individual innovation and entrepreneurship, and shared research to happen with exponential effects.

However, existing open-source concepts can’t always be directly applied to AI systems, which raises questions on how to use open-source licenses with AI. It’s important that we carry forward open principles that have made the sea-change we’re experiencing with AI possible while clarifying the concept of open-source AI and addressing concepts like derived work and author attribution.


Taking a comprehensive approach to releasing Gemma safely and responsibly

Licensing and terms of use are only one part of the evaluations, technical tools, and considered decision-making that went into aligning this release with our responsible AI Principles. Our approach involved:

  • Systematic internal review in accordance with our AI Principles: Consistent with our AI Principles, we release models only when we have determined the benefits are significant, and the risks of misuse are low or can be mitigated. We take that same approach to open models, weighing the benefits of wider access to a particular model against the risks of misuse and how we can mitigate them. With Gemma, we considered the increased AI research and innovation by us and many others in the community, the access to AI technology the models could bring, and what access was needed to support these use cases.
  • A high evaluation bar: Gemma models underwent thorough evaluations, and were held to a higher bar for evaluating risk of abuse or harm than our proprietary models, given the more limited mitigations currently available for open models. These evaluations cover a broad range of responsible AI areas, including safety, fairness, privacy, societal risk, as well as capabilities such as chemical, biological, radiological, nuclear (CBRN) risks, cybersecurity, and autonomous replication. As described in our technical report, the Gemma models exhibit state-of-the-art safety performance in human side-by-side evaluations.
  • Responsibility tools for developers: As we release the Gemma models, we are also releasing a Responsible Generative AI Toolkit for developers, providing guidance and tools to help them create safer AI applications.

We continue to evolve our approach. As we build these frameworks further, we will proceed thoughtfully and incorporate what we learn into future model assessments. We will continue to explore the full range of access mechanisms, with benefits and risk mitigation in mind, including API-based access and staged releases.


Advancing open model safety together

Many of today’s AI safety tools are designed for systems whose design assumes restricted access and redistribution, as well as auxiliary controls like query filters. Similarly, much of the AI safety research for improving mitigations takes on the design assumptions of those systems. Just as we have created unique threat models and solutions for other open technology, we are developing safety and security tools appropriate for the differences of openly available AI.

As models become more and more capable, we are conducting research and investing in rigorous safety evaluation, testing, and mitigations for open models. We are also actively participating in conversations with policymakers and open-source community leaders on how the industry should approach this technology. This challenge is multifaceted, just like AI systems themselves. Model-sharing platforms like Hugging Face and Kaggle, where developers inspire each other with novel model iterations, play a critical role in efforts to develop open models safely; there is also a role for the cybersecurity community to contribute learnings and best practices.

Building those solutions requires access to open models, sharing innovations and improvements. We believe sharing the Gemma models will not just help increase access to AI technology, but also help the industry develop new approaches to safety and responsibility.

As developers adopt Gemma models and other safety-aligned open models, we look forward to working with the open-source community to develop more solutions for responsible approaches to AI in the open ecosystem. A global diversity of experiences, perspectives, and opportunities will help build safe and responsible AI that works for everyone.

By Anne Bertucio – Sr Program Manager, Open Source Programs Office; Helen King – Sr Director of Responsibility, Google DeepMind

YouTube releases scripts to help partners and creators optimize their work

At YouTube Technology Services, we believe that open source software is essential for driving innovation and collaboration in the YouTube ecosystem. We want to make automation on YouTube more accessible by providing publicly available scripts to automate common use cases, aiming to decrease the cost for partners and creators to handle the most common scenarios when managing their content on YouTube.

In order to do so, we are announcing a new GitHub Organization, YouTubeLabs, where you will find open source code examples in the code-samples repository. We are providing open source scripts for a variety of use cases, including but not limited to:

Most code samples rely on public YouTube APIs or Google APIs and are well-documented and well-commented, in order to be easily modified by partners and creators.

We are delivering code that aims to be as accessible as possible to our partners and creators, with minimal configuration and minimal installation required. That's why we rely on Colaboratory notebooks (Colab) and Apps Script as the main pillars of our open source offering. Colab is a free, cloud-based Jupyter notebook environment that makes it easy to run Python code in the browser, and it is integrated with Google Drive. Apps Script is a serverless platform that allows you to write scripts that run on Google's servers.
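As a sketch of the kind of automation these scripts enable, the snippet below builds a YouTube Data API v3 `videos.list` request URL and pulls video titles out of a response. The API key and the sample response payload are placeholders for illustration, not code from the YouTubeLabs repository.

```python
import json
from urllib.parse import urlencode

API_BASE = "https://www.googleapis.com/youtube/v3"

def build_videos_request(video_ids, api_key, parts=("snippet", "statistics")):
    """Build a YouTube Data API v3 videos.list request URL."""
    query = urlencode({
        "part": ",".join(parts),   # which resource sections to return
        "id": ",".join(video_ids), # comma-separated video IDs
        "key": api_key,            # placeholder: your own API key
    })
    return f"{API_BASE}/videos?{query}"

def extract_titles(response_json):
    """Pull video titles out of a videos.list JSON response body."""
    payload = json.loads(response_json)
    return [item["snippet"]["title"] for item in payload.get("items", [])]

# Placeholder response in the shape the videos.list endpoint returns.
sample = json.dumps({"items": [{"id": "abc123",
                                "snippet": {"title": "My Upload"}}]})
print(build_videos_request(["abc123"], "YOUR_API_KEY"))
print(extract_titles(sample))
```

In a Colab notebook you would typically let a client library handle the HTTP call; the point here is only the shape of the request and response a partner script works with.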

We believe that open source software is key to the future of the YouTube ecosystem. By making our code available to the public, we are helping to empower partners and creators to do more with YouTube.

Want to get started? Check out some of the code examples already available in YouTubeLabs’ code-samples repository:

We look forward to continuing to build out our open source examples in the coming months, so don’t forget to “like and subscribe” to our repository to stay tuned for more!

By Federico Villa and Haley Schafer – Partner Technology Managers on behalf of YouTube Technology Services