Posted by Diana Wong, Product Manager, Android Jetpack
This blog post is part of a weekly series for #11WeeksOfAndroid. Each week we’re diving into a key area of Android so you don’t miss anything. This week, we spotlighted Jetpack; here’s a look at what you should know.
The big news
In 2018, we launched Android Jetpack as a suite of libraries to help developers follow best practices, reduce boilerplate code, and write code that works consistently across Android versions and devices. We are excited about the growth we’ve seen and the incredible feedback that developers like you have shared with us. 47% of the top 1000 apps use 2 or more Jetpack libraries, not including core libraries like AppCompat or Lifecycle. Our work over the past year has been about making the basics easy for Android developers, so that you can focus on the code you care about. We have released many updates to our existing libraries as well as new libraries to help make building high-quality apps easier.
What to watch
We have also been busy pushing out many updates over the past year!
For an overall look at what’s new in Jetpack, be sure to check out our talk from #Android11 Beta launch:
It’s a quick fly-by introducing many of the updates to our libraries, with pointers on how to get started.
This week, we’ve also done deep dives into major releases like Hilt, including cheat sheets to help you get started, and how we migrated our own samples to use Hilt for dependency injection. Less boilerplate = more fun.
Paging 3.0 is one of our first libraries written Kotlin-first and based on coroutines. The Paging library adds the features you asked for, like better error handling, easier list transformations such as map and filter, and support for common features like list separators, headers, and footers. We added RxJava, LiveData, and ListenableFuture support and backwards compatibility with Paging 2, so it’s easier to migrate.
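To give a feel for the API, here’s a minimal sketch of a coroutine-based PagingSource; the MessageApi service, its fetchMessages(page, size) call, and the Message type are hypothetical stand-ins for your own data layer.
import androidx.paging.PagingSource
import androidx.paging.PagingState
// A minimal Paging 3 source backed by a suspending network call
class MessagePagingSource(private val api: MessageApi) : PagingSource<Int, Message>() {
    override suspend fun load(params: LoadParams<Int>): LoadResult<Int, Message> {
        val page = params.key ?: 0
        return try {
            val items = api.fetchMessages(page, params.loadSize)
            LoadResult.Page(
                data = items,
                prevKey = if (page == 0) null else page - 1,
                nextKey = if (items.isEmpty()) null else page + 1
            )
        } catch (e: Exception) {
            // Errors surface to the UI as LoadResult.Error instead of crashing
            LoadResult.Error(e)
        }
    }
    override fun getRefreshKey(state: PagingState<Int, Message>): Int? =
        state.anchorPosition?.let { anchor ->
            state.closestPageToPosition(anchor)?.prevKey?.plus(1)
        }
}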
Using the Camera in your app? CameraX is in Beta and helps developers manage edge cases across different devices and OS versions, so that you don’t have to.
This year, we've made several major improvements with the release of Navigation 2.3, which allows you to navigate between different screens of your app with ease while also allowing you to follow Android UI principles. Let us navigate you through them all here:
Spotlight on Permissions
In Android 11, we continued our work to give users even more control over sensitive permissions. At the same time, it's very important to us that we make it as easy as possible for you as developers to build for Android. With the changes in privacy over the past several releases, Android Jetpack is making it easier for your app to work with Permissions. Now there are type-safe contracts for common intents and more via new ActivityResult APIs. These changes simplify how you request permissions, and we’ll continue to work on making permissions easier in the future. Find out more in this post.
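As a sketch of what the new contracts look like in practice (the openCamera and showCameraRationale handlers are hypothetical):
import android.Manifest
import androidx.activity.result.contract.ActivityResultContracts
// Inside a ComponentActivity or Fragment: register a type-safe contract,
// no more manual onRequestPermissionsResult parsing
val requestCamera =
    registerForActivityResult(ActivityResultContracts.RequestPermission()) { granted ->
        if (granted) openCamera() else showCameraRationale()
    }
// Later, e.g. from a button click:
requestCamera.launch(Manifest.permission.CAMERA)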
Learning path
Take a look at our new Learning pathway for an easy way to go through all the highlights from this week. It’s an ordered tutorial which guides you through our new content, culminating in a quiz. Bonus: You earn a bright and shiny Jetpack badge to be saved to your Google Developer Profile. In addition to the learning pathway, we’ve also got a new library explorer to make it simple to find more about Jetpack libraries you might be looking for and their latest updates.
Key takeaways
Best practices are baked into Jetpack libraries, giving opinionated guidance to make it easier for you to build a higher-quality Android app. We’ve released new features for Navigation and WorkManager, updates to increase the stability of CameraX, added robustness for Biometrics, and more. We’ve also launched new libraries, like our collaboration with Dagger for Hilt and a new library to help improve app startup. Your feedback is important to us, so give these libraries a shot, tell us what you think, and help us improve them!
Resources
You can find the entire playlist of #11WeeksOfAndroid video content here, and learn more about each week here. We’ll continue to spotlight new areas each week, so keep an eye out and follow us on Twitter and YouTube. Thanks so much for letting us be a part of this experience with you!
This blog post is part of a weekly series for #11WeeksOfAndroid. Each week we’re diving into a key area of Android so you don’t miss anything. This week, we spotlighted languages; here’s a look at what you should know.
Modern Android development starts with outstanding language support. Together, Kotlin, the Java programming language, and C++ form the foundation for Android’s APIs and the tools you use every day for app development. This week we dove into all of the latest news across Android’s three core languages: from Kotlin coroutines to Android 11’s new Java APIs to better tools for native development, there’s a lot packed into the latest release.
Kotlin and coroutines
Kotlin is at the core of Android’s modern, opinionated APIs. We hear from Android developers around the world that they love Kotlin for how expressive it is, how it helps you write higher quality apps, and how easy it is to start using in your existing Java codebase. More than 70% of the top 1000 apps on the Play Store now use Kotlin, and SlashData™ announced earlier this year that Kotlin has been the fastest growing language community in percentage terms over the past two years. With the Android 11 beta, we decided to further embrace Kotlin by officially recommending coroutines for asynchronous work on Android.
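As a minimal sketch of that recommendation (the NewsRepository and its suspending fetchLatestNews() call are hypothetical), asynchronous work scoped to a lifecycle-aware coroutine scope looks like this:
import androidx.lifecycle.ViewModel
import androidx.lifecycle.viewModelScope
import kotlinx.coroutines.launch
class NewsViewModel(private val repository: NewsRepository) : ViewModel() {
    fun loadNews() {
        // Cancelled automatically when the ViewModel is cleared
        viewModelScope.launch {
            val latest = repository.fetchLatestNews() // suspending network call
            // publish the result to your UI state here
        }
    }
}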
From Kotlin-first libraries in Android Jetpack to deep integration with the tools in Android Studio, Android is deeply committed to Kotlin — and there’s never been a better time to start using it. We’ve heard from many of you, though, that convincing your team to adopt Kotlin is not always easy. Even though Kotlin is 100% interoperable with the Java programming language, your teammates might have concerns. Is it worth spending the time learning a new language? How should you prioritize Kotlin against our other product and technology priorities?
This week we released a new case study from the Google Home team to help answer some of these questions. Over the course of one year, the Google Home team moved all new feature development to Kotlin and found their null pointer exceptions dropped by 33% during the same period. This is consistent with what we’ve heard from Android teams all over the world — from Duolingo to Zomato to Cash App — Kotlin is delivering value both in the form of productivity and higher app quality for teams large and small. For all our latest case studies and data on Kotlin, check out our new Kotlin case studies page.
For beginners, we announced the launch of our new Android Basics in Kotlin course. If you are just learning how to program, Android Basics teaches essential programming concepts like functions and variables and will take you from “Hello World” all the way up through building a whole collection of Android apps in Kotlin.
The Java programming language and C++
When we announced official support for Kotlin three years ago, we didn’t forget about the large number of Java and C++ Android developers. In the Android 11 release, we sought to keep improving our support for both of these languages. With the Android 11 beta, we upgraded our Java library support with a number of new APIs from OpenJDK 9, 10, and 11. We also unveiled Java library desugaring in Android Studio 4.0, making it easy to use many of these newer Java APIs even on older Android devices — for those of you who have asked for java.time support on older devices, we’ve heard you loud and clear, and it’s arrived. For all the latest information on how to make use of these newer APIs, check out Murat Yener’s talk Support for newer Java APIs. With Android 11, we also updated the Android runtime to make app startup even faster with I/O prefetching.
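Enabling desugaring is a small build change; here’s a sketch of the build.gradle configuration (the desugar_jdk_libs version shown is illustrative, so check the documentation for the latest):
android {
    compileOptions {
        // Required for Java library desugaring
        sourceCompatibility JavaVersion.VERSION_1_8
        targetCompatibility JavaVersion.VERSION_1_8
        coreLibraryDesugaringEnabled true
    }
}
dependencies {
    coreLibraryDesugaring 'com.android.tools:desugar_jdk_libs:1.0.5'
}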
Finally, we continue to focus on improvements to the D8 and R8 compilers in Android Studio. Android Studio comes with built-in support for the R8 shrinker, which keeps your app small by stripping out unused code, leading to higher installs and retention among your users. We also recently added support for shrinking Kotlin libraries and apps that use Kotlin reflection with R8. For more information, check out Mads Ager and Morten Krogh-Jespersen’s latest Medium post.
Resources
You can find the entire playlist of #11WeeksOfAndroid video content here, and learn more about each week here. We’ll continue to spotlight new areas each week, so keep an eye out and follow us on Twitter and YouTube. Thanks so much for letting us be a part of this experience with you!
Posted by Dirk Dougherty, Android Developer Relations
This blog post is part of a weekly series for #11WeeksOfAndroid. Each week we’re diving into a key area of Android so you don’t miss anything. This week, we spotlighted Android 11 Compatibility; here’s a look at what you should know.
The big news
With Beta 2 now in the hands of users and developers, Android 11 is moving quickly toward the final release later in Q3. For developers, now is the time to make sure your apps are ready! With that in mind, this week we highlighted some resources that can help you get started with app compatibility testing and use some of the new tools in Android 11. Here’s a quick rundown of topics that we covered.
Platform stability
In Android 11 we added a new release milestone called Platform Stability to clearly signal to developers that all APIs and system behaviors are complete. This week, with Beta 2, Android 11 reached Platform Stability, so it’s a great time to do your final compatibility testing and updates. The Beta 2 and Platform Stability blog post goes into more detail on what this milestone means for developers, and you can also read about it in the Android 11 timeline.
App compatibility
As we talked about in our Beta 2 post this week, Android 11 compatibility means that your app is validated to run properly on Android 11 with all of the functionality and features that users expect. To get started, all you need is your app and a device or emulator running Android 11.
When making sure an app is compatible, the goal is to test your app and make the minimum changes to maintain your app’s functionality on Android 11, then publish the compatible version to users by the Android 11 final release. In most cases you should be able to do this without changing your targetSdkVersion or compiling against the new APIs.
It isn’t just for apps and games either - if you develop SDKs, libraries, tools, or even frameworks, now is the time to test those against Android 11 and release a compatible version. App and game developers using your products could be blocked until they can get your Android 11 compatible versions.
For more details on app compatibility, take a look at the migration guide and the list of behavior changes that could affect your apps.
Tools for testing your apps
We highlighted some new tools for you to use as you get started with compatibility testing. Our blog post “Testing app compatibility in Android 11” went into the details.
First is the compatibility framework, a new feature that helps with managing the platform changes that can affect apps. It provides standard metadata for changes, standard gating based on targetSdkVersion, and standard log output to help you identify a change affecting your app. You can toggle behavioral changes in a debuggable app, either through Developer options in Settings or through adb. This helps you isolate changes and test against them individually.
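For example, here’s a sketch of toggling a named change from the command line (the change name and package shown are illustrative):
# Enable or disable a single behavior change for a debuggable app
adb shell am compat enable DEFAULT_SCOPED_STORAGE com.example.app
adb shell am compat disable DEFAULT_SCOPED_STORAGE com.example.app
# Return the change to its default state for that app
adb shell am compat reset DEFAULT_SCOPED_STORAGE com.example.app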
Isolating regressions across devices and API levels can be time-consuming and complex. Now with Android Studio 4.2, you can run instrumentation tests in parallel across multiple physical or virtual devices at once, then compare all of the results in a single Test Matrix. You can run tests on more devices in less time, and catch issues earlier.
Android Generic System Image (GSI) is a great way to expand your Android 11 testing across a broader set of devices and we’ve released an updated Android GSI codelab to help you get started. Through GSI, you can install a generic version of Android 11 on any unlocked, Treble-compliant device that shipped with Android 9 or higher. This includes not only Pixel devices, but many other popular devices in use across the global Android ecosystem.
App compatibility toggles in Developer options.
Ecosystem updates and app compatibility
In our “Accelerating Android updates” blog post, we looked at how we’re continuing to get the latest OS to reach critical mass by expanding Android’s updatability architecture. With technology like Project Treble and Google Play system updates, we can deliver updates across more devices faster, and increase consistency across the ecosystem.
The work we’ve been doing with Project Treble is making it dramatically faster and easier to bring up new versions of Android on new and existing devices. It also makes it possible for device-makers to run their own Developer Preview programs, in some cases in parallel with Android’s own ongoing development. These programs help device makers get their OS updates ready sooner and engage earlier with the Android developer community.
With Google Play system updates (Project Mainline) the goal is to directly update core OS components across devices in the Android ecosystem to improve security, privacy, and consistency across the ecosystem. In Android 11, we’ve added more updatable modules to standardize behaviors in key app-facing areas such as permissions, media, NNAPI, and others.
Other improvements include a Generic Kernel Image (GKI) and Virtual A/B, a new over-the-air update mechanism that combines the benefits of seamless updates with smaller storage requirements. We’re working closely with device makers to bring these to Android 11 devices.
Over time, these will help reduce your development and testing costs to make your app compatible across platform versions and devices.
Taking center stage
A common reason for unexpected app compatibility issues is apps and games depending on Android non-SDK interfaces. In Android 11 we're continuing our long-term effort to move apps to using public APIs instead.
This week we highlighted Excelliance Tech, who recently moved their LeBian SDK away from non-SDK interfaces, toward stable, official APIs. Their collaboration with the Android team also led to a new public API for resource loading that all developers can use - the ResourcesLoader API in Android 11.
Check out the Excelliance Tech story in this blog post.
The Excelliance Tech team.
What to watch
During Android 11 Compatibility week we posted three short videos to help you plan for compatibility and test your apps. View the playlist here.
The video below gives you a quick overview of Android’s annual release timeline and what the phases mean for developers.
Next, here’s a video that introduces the compatibility framework, a new testing and debugging feature for developers in Android 11. It shows what it is, why it is useful, and how to use it. You’ll walk through an example that shows how you’d enable a specific change, test your app with the change, and then look for the log output to help you identify the change that affected the app.
Last, this video takes you through a new feature in Android Studio that lets you run instrumentation tests in parallel on multiple devices. It shows you how to set up a device set, run tests on the devices, and then jump into the Test Matrix to compare and analyze results. It’s a great way to do your app compatibility testing in Android Studio.
Learning path
If you’re looking for an easy way to pick up the highlights of this week, check out the Compatibility pathway. A pathway is an ordered tutorial that allows users to complete a pre-defined module that culminates in a quiz. A badge is awarded to each user who passes the quiz and can be saved to your Google Developer Profile. Test your knowledge about Android 11 Compatibility to earn a limited edition Android 11 Compatibility badge.
Key takeaways
With each release, we’re working to reduce the impact of compatibility testing on your apps. In Android 11, we’ve added new processes, developer tools, and release milestones to make it easier. We hope the resources we provided this week are helpful as you get started with your compatibility testing. Here are this week’s key takeaways for developers:
Android 11 has reached Platform Stability and all app-facing APIs and behaviors are now complete.
App and game developers should start compatibility testing now and release updates by the Android 11 final release later this year.
SDK, library, and tool developers should complete testing and release compatible versions as soon as possible to avoid blocking downstream developers.
New tools and resources are available to help. See below for highlights and visit developer.android.com/11 for complete details.
This blog post is a collaboration between Google and Viber. Authored by Kseniia Shumelchyk from Google, and Anton Novikov and Sergey Kozlov from Viber.
As a messaging app, Viber needs to store, process and share a significant amount of data. Viber aims to give its users an easy, fast, reliable and secure communication platform by providing an intuitive interface and operating with files in a privacy-preserving way. We believe the modern scoped storage paradigm provides this foundation for app developers and users.
Scoped storage was introduced in Android 10 with further improvements in Android 11 to provide better protection to app and user data on a platform level. Due to Viber's complexity, the team opted to incrementally implement the changes that were required to comply with scoped storage.
In this article, we’ll share how Viber handled the migration to scoped storage, focusing on what they did to optimize working with media files and other data in the app.
Managing storage across Android versions
Android’s storage model has evolved to adapt to changing privacy considerations, leading to changes in the storage system APIs. Let’s take a look at the key platform changes that affected the legacy Viber implementation.
Media directories
Scoped storage changes the way that apps store and access files on a device's external storage. Viber needed to evaluate the differences between the existing app's storage model and the updated platform guidelines, then make gradual application changes to work with files in scoped storage. In the meantime, Viber set the requestLegacyExternalStorage flag to temporarily opt out of scoped storage on Android 10 until the app was fully compatible.
In order to adjust their app experience to scoped storage, Viber now contributes public media files to well-defined media collections using the MediaStore API. This way, the files are accessible in a device gallery, and can be read by other apps with the storage permission. Private media files are stored in the app-specific directory on external storage and are accessed via the internal ContentProvider.
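Here’s a sketch of what contributing a public photo looks like with MediaStore (the display name and the jpegBytes payload are illustrative):
import android.content.ContentValues
import android.os.Environment
import android.provider.MediaStore
val values = ContentValues().apply {
    put(MediaStore.Images.Media.DISPLAY_NAME, "chat_photo.jpg")
    put(MediaStore.Images.Media.MIME_TYPE, "image/jpeg")
    // RELATIVE_PATH is available on Android 10 and above
    put(MediaStore.Images.Media.RELATIVE_PATH, Environment.DIRECTORY_PICTURES)
}
val resolver = context.contentResolver
val uri = resolver.insert(MediaStore.Images.Media.EXTERNAL_CONTENT_URI, values)
uri?.let { imageUri ->
    resolver.openOutputStream(imageUri)?.use { stream -> stream.write(jpegBytes) }
}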
Storage permissions
The other notable update is related to changes in the storage permissions model: Apps in scoped storage have unrestricted access to their app-specific directories on external storage and can contribute to well-defined media collections without requesting a runtime permission. This change will help Viber provide more granular control to their users:
“This addition supports our efforts to provide our users with the best security and privacy solutions supported by the Android OS; users will benefit from this added security later without needing to opt in. We also added a new ‘Save to gallery’ option allowing users to choose whether or not to make their photos readable by other apps. Because chats may contain private images or videos, it’s important to give users the ability to hide these files from the gallery. This change gives users additional control over the content included in their Viber messages,” said Anton Novikov and Sergey Kozlov from Viber.
Accessing files outside of app-specific directory
Previously, Viber created and consumed files in a custom top level directory and depended on file path access. With scoped storage, saving app files to a top level directory became an anti-pattern, so Viber has followed best practices to update their implementation to store media files from the chats only in locations that are accessible in scoped storage.
However, to reduce the complexity of migration, Viber decided to keep their own top level directory for Android 10 and below, storing only the media files that are not exposed to the device’s Gallery app, while for Android 11 and above this directory is used in read-only mode to provide backward compatibility.
Another use case that Viber has been refining is sharing files in the chats. The updated storage runtime permission gives read access only to the images, videos and audio files that are available through MediaProvider. Starting from Android 11, the only way for Viber to access non-media files created by other apps is by using the Storage Access Framework document picker, which they had already utilized in a different part of their app.
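Here’s a sketch of that picker flow with the type-safe ActivityResult wrapper for ACTION_OPEN_DOCUMENT (the MIME filters and the attachToChat handler are illustrative):
import androidx.activity.result.contract.ActivityResultContracts
// OpenDocument wraps Intent.ACTION_OPEN_DOCUMENT in a type-safe contract
val openDocument =
    registerForActivityResult(ActivityResultContracts.OpenDocument()) { uri ->
        uri?.let { attachToChat(it) }
    }
// Let the user pick a non-media file to share in a chat
openDocument.launch(arrayOf("application/pdf", "application/zip"))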
App-specific files within external storage
In the scoped storage environment, app-specific directories on external storage become private, inaccessible to other apps. This change has helped Viber lean on external storage for storing private user files:
“We find the change to app-specific directories useful, because it will help ensure that personal chats are protected and backed by platform security,” said Anton Novikov from Viber. Learn more about how to access app-specific files.
Single interface to access storage
Because Viber targets a large audience running on Android 4.2 and above, they introduced an abstraction layer that aids them in managing storage access efficiently across all supported Android versions and with their use cases in mind.
Previously, Viber heavily used the File API to access files, including files in legacy storage locations. Further, they stored absolute file paths for entries in the local database to keep the user's conversation history.
To standardize access to this conversation history and thus ensure that users don’t lose access to their files, Viber replaced absolute file paths with content URIs. In the new implementation, the app is accessing files only via content providers:
Internal FileProvider for Viber app-specific directories.
External file providers available in the Android framework, such as MediaStore or the Storage Access Framework, or those belonging to another app that shares files with Viber through Intent.ACTION_SEND.
By using a consistent ContentProvider layer, the ContentResolver gives the app a unified interface to access the file content.
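In practice that means one code path for reading a file, wherever it lives; a minimal sketch:
import android.content.Context
import android.net.Uri
// Works the same whether the URI points at an internal FileProvider,
// MediaStore, or a document returned by the Storage Access Framework
fun readAllBytes(context: Context, uri: Uri): ByteArray? =
    context.contentResolver.openInputStream(uri)?.use { it.readBytes() }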
This approach has also helped Viber optimize the network layer and define a universal Loader abstraction to upload/fetch and to read/store different types of media files like voice messages, chat images and stickers.
Summary
Android 11 further enhances scoped storage, which provides better protection of app and user data and makes the transition easier for developers. It’s amazing to see how many apps like Viber have been migrating to take advantage of scoped storage since Android 10.
We hope Viber’s story is useful and will inspire you to modernize your Android apps as well. Learn more about Android storage use cases and best practices.
This blog post is part of a weekly series for #11WeeksOfAndroid. Each week we’re diving into a key area of Android so you don’t miss anything. This week, we spotlighted Privacy and Security; here’s a look at what you should know.
Privacy and security is core to how we design Android, and with every new release we increase our investment in this space. Android 11 continues to make important strides in these areas, and this week we’ll be sharing a series of updates and resources about Android privacy and security. But first, let’s take a quick look at some of the most important changes we’ve made in Android 11 to protect user privacy and make the platform more secure.
As shared in the “All things privacy in Android 11” video, we’re giving users even more control over sensitive permissions. Throughout the development of this release, we have engaged deeply and frequently with our developer community to design these features in a balanced way - amplifying user privacy while minimizing developer impact. Let’s go over some of these features:
One-time permission: In Android 10, we introduced a granular location permission that allows users to limit access to location only when an app is in use (aka foreground only). When presented with the new runtime permission options, users choose foreground-only location more than 50% of the time. This demonstrated to us that users really wanted finer controls for permissions. So in Android 11, we’ve introduced one-time permissions that let users give an app access to the device microphone, camera, or location, just that one time. As an app developer, there are no changes that you need to make to your app for it to work with one-time permissions, and the app can request permissions again the next time the app is used. Learn more about building privacy-friendly apps with these new changes in this video.
Background location: In Android 10 we added a background location usage reminder so users can see how apps are using this sensitive data on a regular basis. Users who interacted with the reminder either downgraded or denied the location permission over 75% of the time. In addition, we have done extensive research and believe that there are very few legitimate use cases for apps to require access to location in the background.
In Android 11, background location will no longer be a permission that a user can grant via a run time prompt and it will require a more deliberate action. If your app needs background location, the system will ensure that the app first asks for foreground location. The app can then broaden its access to background location through a separate permission request, which will cause the system to take the user to Settings in order to complete the permission grant.
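Here’s a sketch of that two-step flow (the activity, callback bodies, and click handler are illustrative):
import android.Manifest
import androidx.activity.result.contract.ActivityResultContracts
import androidx.appcompat.app.AppCompatActivity
class LocationActivity : AppCompatActivity() {
    // Step 2: the separate background request; on Android 11 this takes
    // the user to Settings to complete the grant
    private val requestBackground = registerForActivityResult(
        ActivityResultContracts.RequestPermission()
    ) { granted -> /* enable background features such as geofencing */ }
    // Step 1: foreground access must be requested and granted first
    private val requestForeground = registerForActivityResult(
        ActivityResultContracts.RequestPermission()
    ) { granted ->
        if (granted) {
            requestBackground.launch(Manifest.permission.ACCESS_BACKGROUND_LOCATION)
        }
    }
    fun onEnableLocationClicked() {
        requestForeground.launch(Manifest.permission.ACCESS_FINE_LOCATION)
    }
}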
In February, we announced that Google Play developers will need to get approval to access background location in their app to prevent misuse. We're giving developers more time to make changes and won't be enforcing the policy for existing apps until 2021. Check out this helpful video to find possible background location usage in your code.
Permissions auto-reset: Most users tend to download and install over 60 apps on their device but interact with only a third of these apps on a regular basis. If users haven’t used an app that targets Android 11 for an extended period of time, the system will “auto-reset” all of the granted runtime permissions associated with the app and notify the user. The app can request the permissions again the next time the app is used. If you have an app that has a legitimate need to retain permissions, you can prompt users to turn this feature OFF for your app in Settings.
Data access auditing APIs: Android encourages developers to limit their access to sensitive data, even if they have been granted permission to do so. In Android 11, developers will have access to new APIs that will give them more transparency into their app’s usage of private and protected data. The APIs will enable apps to track when the system records the app’s access to private user data.
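Here’s a sketch of registering an audit callback on Android 11 (the log tag is illustrative):
import android.app.AppOpsManager
import android.app.AsyncNotedAppOp
import android.app.SyncNotedAppOp
import android.util.Log
val appOps = context.getSystemService(AppOpsManager::class.java)
appOps.setOnOpNotedCallback(context.mainExecutor, object : AppOpsManager.OnOpNotedCallback() {
    // Access noted during a synchronous API call
    override fun onNoted(op: SyncNotedAppOp) { Log.d("DataAudit", "Sync: ${op.op}") }
    // Access the app noted about itself
    override fun onSelfNoted(op: SyncNotedAppOp) { Log.d("DataAudit", "Self: ${op.op}") }
    // Access that happens asynchronously, e.g. in a callback
    override fun onAsyncNoted(op: AsyncNotedAppOp) { Log.d("DataAudit", "Async: ${op.op}") }
})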
Scoped Storage: In Android 10, we introduced scoped storage which provides a filtered view into external storage, giving access to app-specific files and media collections. This change protects user privacy by limiting broad access to shared storage in many ways including changing the storage permission to only give read access to photos, videos and music and improving app storage attribution. Since Android 10, we’ve incorporated developer feedback and made many improvements to help developers adopt scoped storage, including: updated permission UI to enhance user experience, direct file path access to media to improve compatibility with existing libraries, updated APIs for modifying media, Manage External Storage permission to enable select use cases that need broad files access, and protected external app directories. In Android 11, scoped storage will be mandatory for all apps that target API level 30. Learn more in this video and check out the developer documentation for further details.
Google Play system updates: Google Play system updates were introduced with Android 10 as part of Project Mainline. Their main benefit is to increase the modularity and granularity of platform subsystems within Android so we can update core OS components without needing a full OTA update from your phone manufacturer. Earlier this year, thanks to Project Mainline, we were able to quickly fix a critical vulnerability in the media decoding subsystem. Android 11 adds new modules, and maintains the security properties of existing ones. For example, Conscrypt, which provides cryptographic primitives, maintained its FIPS validation in Android 11 as well.
BiometricPrompt API: Developers can now use the BiometricPrompt API to specify the biometric authenticator strength required by their app to unlock or access sensitive parts of the app. We are planning to add this to the Jetpack Biometric library to allow for backward compatibility and will share further updates on this work as it progresses.
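Here’s a sketch with the Android 11 framework API (the titles, button text, and success handler are illustrative):
import android.hardware.biometrics.BiometricManager.Authenticators
import android.hardware.biometrics.BiometricPrompt
import android.os.CancellationSignal
val prompt = BiometricPrompt.Builder(context)
    .setTitle("Verify it’s you")
    // Require a strong (Class 3) biometric for this sensitive action
    .setAllowedAuthenticators(Authenticators.BIOMETRIC_STRONG)
    .setNegativeButton("Cancel", context.mainExecutor) { _, _ -> /* dismissed */ }
    .build()
prompt.authenticate(
    CancellationSignal(),
    context.mainExecutor,
    object : BiometricPrompt.AuthenticationCallback() {
        override fun onAuthenticationSucceeded(result: BiometricPrompt.AuthenticationResult) {
            // unlock the sensitive part of the app
        }
    }
)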
Identity Credential API: This will unlock new use cases such as mobile driver’s licenses, National ID, and Digital ID. It’s being built by our security team to ensure this information is stored safely, using security hardware to secure and control access to the data, in a way that enhances user privacy as compared to traditional physical documents. We’re working with various government agencies and industry partners to make sure that Android 11 is ready for such digital-first identity experiences.
Thank you for your flexibility and feedback as we continue to build an increasingly more private and secure platform. You can learn about more features in the Android 11 Beta developer site. You can also learn about general best practices related to privacy and security.
Please follow Android Developers on Twitter and YouTube to catch helpful content and materials in this area all this week.
Resources
You can find the entire playlist of #11WeeksOfAndroid video content here, and learn more about each week here. We’ll continue to spotlight new areas each week, so keep an eye out and follow us on Twitter and YouTube. Thanks so much for letting us be a part of this experience with you!
This blog post is part of a weekly series for #11WeeksOfAndroid. Each week we’re diving into a key area of Android so you don’t miss anything. Throughout this week, we covered various aspects of Android on-device machine learning (ML). Whatever your stage of development, whether you’re just starting out or maintaining an established app; whatever your role across design, product, and engineering; and whatever your skill level, from beginner to expert, we have a wide range of ML tools for you.
Design - ML as a differentiator
“Focus on the user and all else will follow” is a Google mantra that becomes even more relevant in our machine learning age. Our Design Advocate, Di Dang, highlighted the importance of finding the unique intersection of user problems and ML strengths. Too often, teams are so keen on the idea of machine learning that they lose sight of their user needs.
Di outlined how the People + AI Guidebook can help you make ML product decisions, and used the example of the Read Along app to illustrate topics like precision and recall, which are unique to ML design and development. Check out her interview with the Read Along team, together with your own team, for more inspiration.
New ML Kit fully focused on on-device
When you decide that on-device machine learning is the solution, the easiest way to implement it is through turnkey SDKs like ML Kit. Sophisticated Google-trained models and processing pipelines are offered through an easy-to-use interface in Kotlin / Java. ML Kit is designed and built for on-device ML: it works offline, offers enhanced privacy, unlocks high performance for real-time use cases, and it is free. We recently made ML Kit a standalone SDK, and it no longer requires a Firebase account. Just one line in your build.gradle file and you can start bringing ML functionality into your app.
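For example (the artifact version is illustrative; check the ML Kit release notes for the latest):
// On-device image labeling, no Firebase project required
implementation 'com.google.mlkit:image-labeling:16.0.0'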
The team has also added new functionalities such as Jetpack lifecycle support and the option to use the face contour models via Google Play Services, saving as much as 20MB in app size. Another much anticipated addition is the support for swapping Google models with your own for both Image Labeling and Object Detection and Tracking. This provides one of the easiest ways to add TensorFlow Lite models to your applications without interacting with ByteArray!
Customise with TensorFlow Lite and Android tools
If the base model provided by ML Kit doesn’t quite fit the bill, what should developers do? The first port of call is TensorFlow Hub, where ready-to-use TensorFlow Lite models from both Google and the wider community can be downloaded. From classifiers for 100,000 US supermarket products to tomato plant disease detectors, the choice is yours.
From the examples of the Android Developer Challenge winners, it is obvious that on-device machine learning has come of age, and ML functionalities once reserved for the cloud or supercomputers are now available on your Android phone. Take a step forward with us by trying out our codelabs of the day.
Android on-device machine learning is a rapidly evolving platform. If you have any enhancement requests or feedback on how it could be improved, please let us know, together with your use case (TensorFlow Lite / ML Kit). The time for on-device ML is now.
Resources
You can find the entire playlist of #11WeeksOfAndroid video content here, and learn more about each week here. We’ll continue to spotlight new areas each week, so keep an eye out and follow us on Twitter and YouTube. Thanks so much for letting us be a part of this experience with you!
Yesterday, we talked about turnkey machine learning (ML) solutions with ML Kit. But what if that doesn’t completely address your needs and you need to tweak it a little? Today, we will discuss how to find alternative models, and how to train and use custom ML models in your Android app.
Find alternative ML models
Crop disease models from the wider research community available on tfhub.dev
If the turnkey ML solutions don't suit your needs, TensorFlow Hub should be your first port of call. It is a repository of ML models from Google and the wider research community. The models on the site are ready for use in the cloud, in a web browser or in an app on-device. For Android developers, the most exciting models are the TensorFlow Lite (TFLite) models that are optimized for mobile.
In addition to key vision models such as MobileNet and EfficientNet, the repository also boasts models powered by the latest research. Many of these solutions were previously only available in the cloud, as the models were too large and too power-intensive to run on-device. Today, you can run them on Android on-device, offline and live.
Train your own custom model
Besides the large repository of base models, developers can also train their own models. Developer-friendly tools are available for many common use cases. In addition to Firebase’s AutoML Vision Edge, the TensorFlow team launched TensorFlow Lite Model Maker earlier this year to give developers more choice over the base model and support more use cases. TensorFlow Lite Model Maker currently supports two common ML tasks: image classification and text classification.
The TensorFlow Lite Model Maker can run on your own developer machine or in Google Colab online machine learning notebooks. Going forward, the team plans to improve the existing offerings and to add new use cases.
Using custom models in your Android app
New TFLite Model import screen in Android Studio 4.1 beta
Once you have selected a model or trained your model, there are new, easy-to-use tools to help you integrate it into your Android app without having to convert everything into ByteArrays. The first new tool is ML Model binding with Android Studio 4.1. This lets developers import any TFLite model, read the input / output signature of the model, and use it with just a few lines of code that call the open source TensorFlow Lite Android Support Library.
Another way to implement a TensorFlow Lite model is via ML Kit. Starting in June, ML Kit no longer requires a Firebase project for on-device functionality. In addition, the image classification and object detection and tracking (ODT) APIs support custom models. The latter ODT offering is especially useful in use-cases where you need to separate out objects from a busy scene.
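Here’s a sketch of swapping in a custom classifier with the ML Kit image labeling API (the asset name, threshold, and result count are illustrative):
import com.google.mlkit.common.model.LocalModel
import com.google.mlkit.vision.label.ImageLabeling
import com.google.mlkit.vision.label.custom.CustomImageLabelerOptions
// Wrap a TFLite model bundled in the APK’s assets
val localModel = LocalModel.Builder()
    .setAssetFilePath("custom_model.tflite")
    .build()
val options = CustomImageLabelerOptions.Builder(localModel)
    .setConfidenceThreshold(0.6f)
    .setMaxResultCount(3)
    .build()
val labeler = ImageLabeling.getClient(options)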
So how should you choose between these three solutions? If you are trying to detect a product on a busy supermarket shelf, ML Kit object detection and tracking can help your user select a specific product for processing. The API then performs image classification on just the part of the image that contains the product, which results in better detection performance. On the other hand, if the scene or the object you are trying to detect takes up most of the input image, for example, a landmark such as Big Ben, using ML Model binding or the ML Kit image classification API might be more appropriate.
TensorFlow Hub bird detection model with ML Kit Object Detection & Tracking API
TensorFlow Hub and ML Kit Screencast: In this video, we first go through how Android developers can get the most out of TensorFlow Hub: how to find and download a model. Then we explain the steps to integrate it with ML Kit’s Object Detection and Tracking API.
Customizing your model is easier than ever
Finding, building and using custom models on Android has never been easier. As both Android and TensorFlow teams increase the coverage of machine learning use cases, please let us know how we can improve these tools for your use cases by filing an enhancement request with TensorFlow Lite or ML Kit.
Tomorrow, we will take a step back and focus on how to appropriately use and design for a machine-learning-first Android app. The content will be appropriate for the entire development team, so bring your product manager and designers along. See you next time.
Posted by Christiaan Prins, Product Manager, ML Kit and Shiyu Hu, Tech Lead Manager, ML Kit
Two years ago at I/O 2018 we introduced ML Kit, making it easier for mobile developers to integrate machine learning into their apps. Today, more than 25,000 applications on Android and iOS make use of ML Kit’s features. Now, we are introducing some changes that will make it even easier to use ML Kit. In addition, we have a new feature and a set of improvements we’d like to discuss.
A new ML Kit SDK, fully focused on on-device ML
ML Kit's APIs are built to help you tackle common challenges in the Vision and Natural Language domains. We make it easy to recognize text, scan barcodes, track and classify objects in real-time, do translation of text, and more.
The original version of ML Kit was tightly integrated with Firebase, and we heard from many of you that you wanted more flexibility when implementing it in your apps. As a result, we are now making all the on-device APIs available in a new standalone ML Kit SDK that no longer requires a Firebase project. You can still use both ML Kit and Firebase to get the best of both products if you choose to.
With this change, ML Kit is now fully focused on on-device machine learning, giving you access to the unique benefits that on-device versus cloud ML offers:
It’s fast, unlocking real-time use cases - since processing happens on the device, there is no network latency. This means we can do inference on a stream of images / video, or multiple times a second on text strings.
Works offline - you can rely on our APIs even when the network is spotty or your app’s end-user is in an area without connectivity.
Privacy is retained: since all processing is performed locally, there is no need to send sensitive user data over the network to a server.
Naturally, you still get access to Google’s on-device models and processing pipelines, all accessible through easy-to-use APIs, and offered at no cost.
All ML Kit resources can now be found on our new website where we made it a lot easier to access sample apps, API reference docs and our community channels that are there to help you if you have questions.
What does this mean if I already use ML Kit today?
If you are using ML Kit for Firebase’s on-device APIs in your app today, we recommend you migrate to the new standalone ML Kit SDK to benefit from new features and updates. For more information and step-by-step instructions to update your app, please follow our Migration guide. The cloud-based APIs, model deployment and AutoML Vision Edge remain available through Firebase Machine Learning.
Shrink your app footprint with Google Play Services
Apart from making ML Kit easier to use, developers also asked if we could ship ML Kit through Google Play Services, resulting in a smaller app footprint and allowing models to be reused between apps. Apart from Barcode scanning and Text recognition, we have now added Face detection / contour (model size: 20MB) to the list of APIs that support this functionality.
// Face detection / Face contour model
// Delivered via Google Play Services outside your app's APK…
implementation 'com.google.android.gms:play-services-mlkit-face-detection:16.0.0'
// …or bundled with your app's APK
implementation 'com.google.mlkit:face-detection:16.0.0'
Jetpack Lifecycle / CameraX support
Android Jetpack Lifecycle support has been added to all APIs. Developers can use addObserver to automatically manage teardown of ML Kit APIs as the app goes through screen rotation or closure by the user / system. This makes CameraX integration easier. With this release, we are also recommending that developers adopt CameraX in their apps due to the ease of integration and image quality improvements (compared to Camera1) on a wide range of devices.
// ML Kit now supports Lifecycle
val recognizer = TextRecognizer.newInstance()
lifecycle.addObserver(recognizer)
// ...
// Just like CameraX
val camera = cameraProvider.bindToLifecycle( /* lifecycleOwner= */this,
cameraSelector, previewUseCase, analysisUseCase)
For an overview of all recent changes, check out the release notes for the new SDK.
Codelab of the day - ML Kit x CameraX
To help you get started with the new ML Kit and its support for CameraX, we have created this codelab to Recognize, Identify Language and Translate text. If you have any questions regarding this codelab, please raise them on Stack Overflow and tag them with [google-mlkit]. Our team will monitor this.
Early access program
Through our early access program, developers have an opportunity to partner with the ML Kit team and get access to upcoming features. Two new APIs are now available as part of this program:
Entity Extraction - Detect entities in text & make them actionable. We have support for phone numbers, addresses, payment numbers, tracking numbers, date/time and more.
Pose Detection - Low-latency pose detection supporting 33 skeletal points, including hands and feet tracking.
If you are interested, head over to our early access page for details.
Tomorrow - Support for custom models
ML Kit's turnkey solutions are built to help you tackle common challenges. However, if you need a more tailored solution, one that requires custom models, you have typically had to build an implementation from scratch. To help, we are now providing the option to swap out the default Google models with a custom TensorFlow Lite model. We’re starting with the Image Labeling and Object Detection and Tracking APIs, which now support custom image classification models.
Tomorrow, we will dive a bit deeper into how to find or train a TensorFlow Lite model and use it either with ML Kit, or with Android Studio’s new ML binding functionality.