
3 things to know about Kotlin from Android Dev Summit 2019

Last month’s #AndroidDevSummit was jam-packed with announcements and technical news...so much that we wouldn’t be surprised if you missed something. So all this month, we’ll be diving into key areas from throughout the summit so you don’t miss anything. First up, we’re spotlighting Kotlin, with the top things you should know:

#1: Kotlin momentum on Android

Kotlin is at the heart of modern Android development — and we’ve been excited to see how quickly it has won over developers around the world. At Android Dev Summit we announced that nearly 60% of the top 1000 Android apps on the Play Store now use Kotlin, and we’re seeing more developers adopt it every day. Kotlin has helpful features like null safety, data classes, coroutines, and complete interoperability with the Java programming language. We’re doubling down on Kotlin with more Kotlin-first APIs even beyond AndroidX — we just released KTX extensions, including coroutines support, for Play Core. There’s never been a better time to give Kotlin a try.
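To see a few of these features side by side, here’s a minimal sketch (the User class and loadUser() function are hypothetical names for illustration):

import kotlinx.coroutines.delay

// Data classes generate equals(), hashCode(), toString(), and copy() for you.
data class User(val name: String, val email: String?)

// A suspending function: coroutines let you write asynchronous code sequentially.
suspend fun loadUser(): User {
    delay(100) // stand-in for a network or database call
    return User("Ada", email = null)
}

suspend fun greet(): String {
    val user = loadUser()
    // Null safety: email is String?, so the compiler makes you handle null.
    return "Hello, ${user.name} <${user.email ?: "no email"}>"
}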

#2: Learn more: Getting started with Kotlin & diving into advanced Kotlin with coroutines

If you’re introducing Kotlin into an existing codebase, chances are that you’ll be calling the Java programming language from Kotlin and vice versa. At Android Dev Summit, developer advocates Murat Yener, Nicole Borrelli, and Wenbo Zhu took a look at how nullability, getters, setters, default parameters, exceptions, and more work across the two languages.
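As a hedged taste of what the talk covers, here’s a sketch of how a few Kotlin annotations shape what Java callers see (the Event class and its members are hypothetical):

// @JvmOverloads generates Event(String) and Event(String, String) constructors,
// giving Java callers the default-parameter behavior discussed in the talk.
class Event @JvmOverloads constructor(
    val name: String,            // Java sees a getName() getter
    val location: String? = null // Java sees a @Nullable getLocation()
) {
    @JvmField
    var attendees: Int = 0       // exposed to Java as a plain field, no getter/setter

    companion object {
        @JvmStatic
        fun empty(): Event = Event("untitled") // callable as Event.empty() from Java
    }
}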

For those looking into more advanced Kotlin topics, we recommend watching the talk by Jose Alcérreca and Yigit Boyar, which explains how coroutines and Flow fit together with LiveData in your app's architecture, as well as the talk on testing coroutines by Sean McQuillan and Manuel Vivo.
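A minimal sketch of the pattern that talk describes, assuming a hypothetical UserRepository that exposes a Flow; asLiveData() comes from the lifecycle-livedata-ktx artifact:

import androidx.lifecycle.LiveData
import androidx.lifecycle.ViewModel
import androidx.lifecycle.asLiveData
import kotlinx.coroutines.flow.Flow

data class User(val name: String)

interface UserRepository {
    val userFlow: Flow<User>
}

class UserViewModel(repository: UserRepository) : ViewModel() {
    // asLiveData() collects the Flow only while the LiveData has active
    // observers, bridging coroutines and an existing LiveData-based UI.
    val user: LiveData<User> = repository.userFlow.asLiveData()
}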

#3: Get certified in Kotlin

We announced the launch of our Associate Android Developer certification in Kotlin. Now you can prove your proficiency with modern Kotlin development on Android to your coworkers, your professional network, or even your future employer. As part of this launch, you can take this exam at a discount when using the code ADSCERT99 through January 25.

It’s especially great to hear from you, the Android community, at events like Android Dev Summit: what do you want to hear more about, and how can we help with something you’re working on? We asked you to submit your burning questions on Twitter and the livestream, and developer advocates Florina Muntenescu and Sean McQuillan answered your Kotlin and coroutines questions live during our #AskAndroid segment.

You can find the entire playlist of Android Dev Summit sessions and videos here. We’ll continue to spotlight other areas later this month, so keep an eye out and follow Android Developers on Twitter. Thanks so much for letting us be a part of this experience with you!

Java is a registered trademark of Oracle and/or its affiliates.

3 things to know about Jetpack Compose from Android Dev Summit 2019

Posted by Anna-Chiara Bellini, @dr0nequeen

Last month’s #AndroidDevSummit was jam-packed with announcements and technical news...so much that we wouldn’t be surprised if you missed something. So all this month, we’ll be diving into key areas from throughout the summit so you don’t miss anything. Earlier today, we spotlighted Kotlin and now we’re diving into Jetpack Compose, with the top three things you should know:

#1: Jetpack Compose is available in Developer Preview!

Jetpack Compose is Android’s modern toolkit for building native UI. It allows developers to write beautiful Android apps in an intuitive way, writing less code and accelerating development. It's powerful: when writing code is a pleasure, you can focus on making your apps look beautiful and giving your users the best experience. At #AndroidDevSummit, we released the Developer Preview to enable more feedback as we work towards bringing Jetpack Compose to beta next year. All you need to do is download the Android Studio 4.0 Canary build to try it out. To give feedback, feel free to reach out through the Kotlinlang Slack channel or our Bug Tracker to let us know what you think. It's your chance to help us build the right thing!

#2: See what's new in Jetpack Compose

At Android Dev Summit we showed how we've designed Compose to simplify development of Android apps, shared details on the new Material UI components we are building, and offered insights into some of the learnings that are informing the way we think about Compose. We also showcased how to write a small app, including layout and state management for a list of elements, in just a few lines of code.
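In that spirit, here’s a hedged sketch of a small stateful list, written against current Compose APIs (the API surface has evolved since the 2019 developer preview):

import androidx.compose.foundation.layout.Column
import androidx.compose.material.Button
import androidx.compose.material.Text
import androidx.compose.runtime.*

@Composable
fun NameList() {
    // remember + mutableStateOf hold UI state; updating it recomposes the list.
    var names by remember { mutableStateOf(listOf("Ada", "Grace")) }
    Column {
        names.forEach { name -> Text(text = name) }
        Button(onClick = { names = names + "New name" }) {
            Text("Add name")
        }
    }
}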

Watch the Android Dev Summit session video to learn more.

If you want to take a look behind the scenes, we also had a tech deep dive into the inner workings of Compose.

#3: Try it out, with our tutorial, sample app, and codelab!

Jetpack Compose is still in Developer Preview, which means it's a great time to try it out and let us know what you think about it. To help you with that, you can follow our tutorial, which will take you through the first steps of building a Compose app, and also take a look at our sample app, Jetnews, that shows what is currently possible with Jetpack Compose. We also have a popular codelab available to take you through how to build UIs with Compose, how to manage state in composable functions, and data flow principles in Compose.

Jetnews sample app

...and we also heard from you!

But Android Dev Summit isn’t just about what we’ve got to say; it’s also about you telling us what you’d like to see worked on to make your life easier. And this year, one thing we heard strongly from our community was how important it is to have the right tools to manage your layouts on so many different form factors and devices. Jetpack Compose has full Android Studio support and you can iterate fast with live Previews.

Android Dev Summit is over for this year, but you can keep giving us your feedback through the Kotlinlang Slack channel and Bug Tracker.

You can find the entire playlist of Android Dev Summit sessions and videos here. We’ll continue to spotlight other areas later this month, so keep an eye out and follow Android Developers on Twitter. Thanks so much for letting us be a part of this experience with you!

One Biometric API Over all Android

Posted by Isai Damier, Android Developer Platform Engineering (@isaidamier)

Kevin Chyn, Android Framework

Curtis Belmonte, Android Framework

With the launch of Android 10 (API level 29), developers can now use the Biometric API, part of the AndroidX Biometric Library, for all their on-device user authentication needs. The Android Framework and Security team has added a number of significant features to the AndroidX Biometric Library, which makes all of the biometric behavior from Android 10 available to all devices that run Android 6.0 (API level 23) or higher. In addition to supporting multiple biometric authentication form factors, the API has made it much easier for developers to check whether a given device has biometric sensors. And if there are no biometric sensors present, the API allows developers to specify whether they want to use device credentials in their apps.

The features do not just benefit developers. Device manufacturers and OEMs have a lot to celebrate as well. The framework is now providing a friendly, standardized API for OEMs to integrate support for all types of biometric sensors on their devices. In addition, the framework has built-in support for facial authentication in Android 10 so that vendors don’t need to create a custom implementation.

A bit of background

The FingerprintManager class was introduced in Android 6.0 (API level 23). At the time -- and up until Android 9 (API level 28) -- the API provided support only for fingerprint sensors, with no UI: developers needed to build their own fingerprint UI.

Based on developer feedback, Android 9 introduced a standardized fingerprint UI policy. In addition, BiometricPrompt was introduced to encompass more sensors than just fingerprint. Beyond providing a safe, familiar UI for user authentication, it enabled a small, maintainable API surface for developers to access the variety of biometric hardware available on OEM devices. OEMs can now customize the UI with necessary affordances and iconography to expose new biometrics, such as outlines for in-display sensors. With this, app developers don’t need to worry about building customized, device-specific implementations for biometric authentication.

Then, in Android 10, the team introduced some pivotal features to turn the biometric API into a one-stop-shop for in-app user authentication. BiometricManager enables developers to check whether a device supports biometric authentication. Furthermore, the setDeviceCredentialAllowed() method was added to allow developers the option to use a device’s PIN/pattern/password instead of biometric credentials, if it makes sense for their app.

The team has now packaged every biometric feature you get in Android 10 into the androidx.biometric Gradle dependency so that a single, consistent interface is available all the way back to Android 6.0 (API level 23).

How it works

The androidx.biometric Gradle dependency is a support library for the Android framework Biometric classes. On API 29 and above, the library uses the classes under android.hardware.biometrics; on older versions, it falls back to FingerprintManager (API 23 and above) and to Confirm Credential all the way back to API 21. Because of this variety of APIs, we strongly recommend using the androidx support library regardless of which API level your app targets.

To use the Biometric API in your app, do the following.

1. Add the Gradle dependency to your app module

Set $biometric_version to the latest release of the library:

def biometric_version = '1.0.0-rc02'
implementation "androidx.biometric:biometric:$biometric_version"

2. Check whether the device supports biometric authentication

The BiometricPrompt needs to be recreated every time the Activity/Fragment is created; this should be done inside onCreate() or onCreateView() so that BiometricPrompt.AuthenticationCallback can start receiving callbacks properly.

To check whether the device supports biometric authentication, add the following logic:

val biometricManager = BiometricManager.from(context)
if (biometricManager.canAuthenticate() == BiometricManager.BIOMETRIC_SUCCESS) {
   // TODO: show in-app settings, make authentication calls.
}
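canAuthenticate() also reports why authentication isn’t available; here is a sketch of handling the other result codes (TAG is assumed to be defined in your class):

when (biometricManager.canAuthenticate()) {
    BiometricManager.BIOMETRIC_SUCCESS ->
        Log.d(TAG, "App can authenticate using biometrics.")
    BiometricManager.BIOMETRIC_ERROR_NO_HARDWARE ->
        Log.e(TAG, "No biometric hardware is available on this device.")
    BiometricManager.BIOMETRIC_ERROR_HW_UNAVAILABLE ->
        Log.e(TAG, "Biometric hardware is currently unavailable.")
    BiometricManager.BIOMETRIC_ERROR_NONE_ENROLLED ->
        Log.e(TAG, "No biometric credentials are enrolled.")
}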

3. Create an instance of BiometricPrompt

The BiometricPrompt constructor requires both an Executor and an AuthenticationCallback object. The Executor allows you to specify a thread on which your callbacks should be run.

The AuthenticationCallback has three methods:

  1. onAuthenticationSucceeded() is called when the user has been authenticated using a credential that the device recognizes.
  2. onAuthenticationError() is called when an unrecoverable error occurs.
  3. onAuthenticationFailed() is called when the user is rejected, for example when a non-enrolled fingerprint is placed on the sensor, but unlike with onAuthenticationError(), the user can continue trying to authenticate.

The following snippet shows one way of implementing the Executor and how to instantiate the BiometricPrompt:

private fun instanceOfBiometricPrompt(): BiometricPrompt {
   // Assumes this function lives in your FragmentActivity (or Fragment), so
   // `this` can serve as both the executor's Context and the prompt's host.
   val executor = ContextCompat.getMainExecutor(this)

   val callback = object: BiometricPrompt.AuthenticationCallback() {
       override fun onAuthenticationError(errorCode: Int, errString: CharSequence) {
           super.onAuthenticationError(errorCode, errString)
           showMessage("$errorCode :: $errString")
       }

       override fun onAuthenticationFailed() {
           super.onAuthenticationFailed()
           showMessage("Authentication failed for an unknown reason")
       }

       override fun onAuthenticationSucceeded(result: BiometricPrompt.AuthenticationResult) {
           super.onAuthenticationSucceeded(result)
           showMessage("Authentication was successful")
       }
   }

   val biometricPrompt = BiometricPrompt(this, executor, callback)
   return biometricPrompt
}

Instantiating the BiometricPrompt should be done early in the lifecycle of your fragment or activity (e.g., in onCreate or onCreateView). This ensures that the current instance will always properly receive authentication callbacks.

4. Build a PromptInfo object

Once you have a BiometricPrompt object, you ask the user to authenticate by calling biometricPrompt.authenticate(promptInfo). If your app requires the user to authenticate using a strong biometric or needs to perform cryptographic operations in the Android KeyStore, you should use authenticate(PromptInfo, CryptoObject) instead.
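For the CryptoObject variant, here’s a hedged sketch of generating an authentication-bound key in the Android KeyStore and binding a Cipher to it (KEY_NAME is a hypothetical key alias):

import android.security.keystore.KeyGenParameterSpec
import android.security.keystore.KeyProperties
import java.security.KeyStore
import javax.crypto.Cipher
import javax.crypto.KeyGenerator
import javax.crypto.SecretKey

private const val KEY_NAME = "my_biometric_key" // hypothetical key alias

private fun generateSecretKey() {
    val keyGenerator = KeyGenerator.getInstance(
        KeyProperties.KEY_ALGORITHM_AES, "AndroidKeyStore")
    keyGenerator.init(KeyGenParameterSpec.Builder(KEY_NAME,
        KeyProperties.PURPOSE_ENCRYPT or KeyProperties.PURPOSE_DECRYPT)
        .setBlockModes(KeyProperties.BLOCK_MODE_CBC)
        .setEncryptionPaddings(KeyProperties.ENCRYPTION_PADDING_PKCS7)
        .setUserAuthenticationRequired(true) // key is usable only after authentication
        .build())
    keyGenerator.generateKey()
}

private fun getCipher(): Cipher {
    val keyStore = KeyStore.getInstance("AndroidKeyStore").apply { load(null) }
    val secretKey = keyStore.getKey(KEY_NAME, null) as SecretKey
    val cipher = Cipher.getInstance("AES/CBC/PKCS7Padding")
    cipher.init(Cipher.ENCRYPT_MODE, secretKey)
    return cipher
}

// Then: biometricPrompt.authenticate(promptInfo, BiometricPrompt.CryptoObject(getCipher()))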

Either authenticate() call shows the user the appropriate UI, based on the type of biometric credential being used for authentication – such as fingerprint, face, or iris. As a developer you don’t need to know which type of credential is being used for authentication; the API handles all of that for you.

This call requires a BiometricPrompt.PromptInfo object. A PromptInfo is where you define the text that appears in the prompt, such as the title, subtitle, and description. Without a PromptInfo, it is not clear to the end user which app is asking for their biometric credentials. PromptInfo also allows you to specify whether it’s OK for devices that do not support biometric authentication to grant access through the device credentials, such as the PIN, pattern, or password used to unlock the device.

Here is an example of a PromptInfo declaration:

private fun getPromptInfo(): BiometricPrompt.PromptInfo {
   val promptInfo = BiometricPrompt.PromptInfo.Builder()
       .setTitle("My App's Authentication")
       .setSubtitle("Please login to get access")
       .setDescription("My App is using Android biometric authentication")
       .setDeviceCredentialAllowed(true)
       .build()
   return promptInfo
}

For actions that require a confirmation step, such as transactions and payments, we recommend using the default option -- setConfirmationRequired(true) -- which will add a confirmation button to the UI, as shown in Figure 2.

Figure 1. Example face authentication flow using BiometricPrompt with setConfirmationRequired(false).

Figure 2. Example face authentication flow using BiometricPrompt with setConfirmationRequired(true) (default behavior).

5. Ask the user to authenticate

Now that you have all the required pieces, you can ask the user to authenticate.

val canAuthenticate = biometricManager.canAuthenticate()
if (canAuthenticate == BiometricManager.BIOMETRIC_SUCCESS) {
   biometricPrompt.authenticate(promptInfo)
} else {
   Log.d(TAG, "could not authenticate because: $canAuthenticate")
}

And that’s it! You should now be able to perform authentication, using biometric credentials, on any device that runs Android 6.0 (API level 23) or higher.
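To put the pieces together, here’s a hedged sketch of wiring the helpers above into a FragmentActivity (R.layout.activity_login and R.id.login_button are hypothetical resources):

import android.os.Bundle
import android.widget.Button
import androidx.biometric.BiometricManager
import androidx.biometric.BiometricPrompt
import androidx.fragment.app.FragmentActivity

class LoginActivity : FragmentActivity() {
    private lateinit var biometricPrompt: BiometricPrompt

    override fun onCreate(savedInstanceState: Bundle?) {
        super.onCreate(savedInstanceState)
        setContentView(R.layout.activity_login) // hypothetical layout

        // Create the prompt early in the lifecycle so callbacks are received,
        // reusing instanceOfBiometricPrompt() and getPromptInfo() from above
        // (assumed here to be members of this activity).
        biometricPrompt = instanceOfBiometricPrompt()

        findViewById<Button>(R.id.login_button).setOnClickListener {
            if (BiometricManager.from(this).canAuthenticate() ==
                    BiometricManager.BIOMETRIC_SUCCESS) {
                biometricPrompt.authenticate(getPromptInfo())
            }
        }
    }
}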

Going forward

Because the ecosystem continues to evolve rapidly, the Android Framework team is constantly thinking about how to provide long-term support for both OEMs and developers. Given the biometric library’s consistent, system-themed UI, developers don’t need to worry about device-specific requirements, and users get a more trustworthy experience.

We welcome any feedback from developers and OEMs on how to make it more robust, easier to use, and supportive of a wider range of use cases.

For in-depth examples that showcase additional use cases and demonstrate how you might integrate this library into your app, check out our repo, which contains functional sample apps that make use of the library. You can also read the associated developer guide and API reference for more information.

A modern approach to Android development, with Jetpack Compose and more!

Posted by Stephanie Cuthbertson, Director, Product Management

Modern Android Development today

Perhaps as a consequence of Android's flexibility, developers often ask us: what does the Android team recommend when it comes to building apps? You’ve told us: you love our openness...but you’d also love us to marry it with an opinion about the right way to do things, and to make sure the right way is also the easiest way. And so today, at the Android Dev Summit, the team wanted to answer that question for you.

We call our recommendation “modern Android development". Opinionated and powerful, for fast, easy development. Taking away everything that slows you down so you can focus on building incredible experiences. You can see it in the investments we’ve made like creating Android Studio and Jetpack. (Over 90% of our pro developers are using Android Studio today.) Kotlin and Compose are especially great recent examples. Kotlin is a modern, concise language -- something you asked us for and is now the recommended language for Android. Compose is a modern declarative UI toolkit built for the next 10 years. And this might sound a little strange, but we chose and designed these tools to be enjoyable to use: we think that’s important too. Both Kotlin and Compose also have a critically important property. They are designed to be compatible with your existing apps. This means you can phase in Kotlin code and Compose views on your timeline.

It all starts with a great modern language: Kotlin

Modern Android starts with fantastic language support. In fact, we recently passed a milestone where almost 60% of our top 1000 apps are using Kotlin. And we’re working with JetBrains to make it even better: faster Kotlin compile speeds, incremental annotation processing with KAPT, better IDE typing latency, more lint checks, desugaring in D8 and R8, and new optimizations in R8 that are aware of Kotlin-specific bytecode patterns. And we’re releasing full IDE support for Kotlin build scripts today. If you want to grow your skills, we are launching an Advanced Android course with Kotlin on Udacity. And, for those who are already experts, we’re also launching a new Android Developer Certification in Kotlin, available at a discount for the next three months. We’re working to make all of our supported first-class languages — Kotlin, the Java programming language, and C++ — better for you and your team, with Java 8 library desugaring, NDK r21 with updated LLVM, GNU Make, Fortify enabled by default, and more.

Jetpack: Build high quality, powerful apps with less code

Jetpack is designed to solve real-world problems you face every day, and is used by over 84% of the top 10,000 Play Store apps. And we continue to make Jetpack even more helpful:

  • Benchmarking, first announced at Google I/O, is now available as a release candidate. This library makes it easier to measure the performance of your app with confidence.
  • View binding is an easier way to access Views from your code. It is a type-safe solution with minimal build-time impact: no more findViewById(), no more annotation processors. (See the sketch after this list.)
  • CameraX simplifies the development experience and lets you focus on your app by addressing the differences between the many devices in the Android ecosystem; manufacturers like Samsung, Xiaomi, Oppo, Motorola, and LG are already unifying behind CameraX. Previewed at Google I/O, CameraX will be available in Beta in December.
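As a hedged sketch of what view binding looks like in practice (ActivityMainBinding is generated from a hypothetical activity_main.xml layout, and titleText from a hypothetical title_text view in it):

// Enable view binding in the module's build.gradle:
// android { viewBinding { enabled = true } }

import android.os.Bundle
import androidx.appcompat.app.AppCompatActivity
// ...plus your app's generated ActivityMainBinding class.

class MainActivity : AppCompatActivity() {
    private lateinit var binding: ActivityMainBinding

    override fun onCreate(savedInstanceState: Bundle?) {
        super.onCreate(savedInstanceState)
        binding = ActivityMainBinding.inflate(layoutInflater)
        setContentView(binding.root)
        // Type-safe, null-safe access to views; no findViewById() needed.
        binding.titleText.text = "Hello, view binding"
    }
}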

Compose: Android’s new UI toolkit to build beautiful, native apps, now in developer preview

Compose makes it easy to build beautiful, native apps. It provides a declarative way to build UIs which makes your code more intuitive and concise. Inspired by Kotlin, you can adopt Compose at your own pace thanks to seamless compatibility with the existing UI toolkit.

Today we are releasing the Jetpack Compose Developer Preview. All you need to do is download the latest Preview build of Android Studio. Compose is being developed completely in the open, in AOSP. The continuous feedback we receive has led to many API improvements and we want to thank you for providing feedback in our developer studies and the Kotlinlang Slack group. As we enter developer preview, we need even more feedback as we work towards bringing Jetpack Compose to beta next year and ready for use in production apps.

Android Studio 4.0 Canary

Today we also released the first canary of Android Studio 4.0 - built hand in hand with Compose for powerful, integrated tooling support. Android Studio 4.0 includes Compose Live Preview, Code Completion, and a full sample of a Compose app. You’ll also find the new Motion Editor, Java 8 Language library desugaring, full support for KTS files, Kotlin live templates, and more.

Android App Bundles and dynamic delivery testing improvements

Just eighteen months after launch, over 270K Android App Bundles are now in production, covering 25% of all active installs. Based on your feedback, we’re making App Bundles and Dynamic Delivery much easier to test. Internal app sharing lets you share test builds of your app bundle as easily as you share APKs. You can now grant anyone in the team the ability to upload artifacts. You don’t need to sign test versions with your production app signing key, you don’t need to use version codes, and you can upload debuggable artifacts. We’re also making it possible to get download links for old versions of your app from the Play Console, whether they’re app bundles or APKs. And starting today, we’re launching offline testing of dynamic delivery with the fake split install manager, so you can replicate splits being installed by the Play Store while testing locally.

A modern distribution platform, centered around user trust

User trust and safety has always been a top priority at Google Play, with human reviewers, constant improvements to Play Protect, and policy updates that evolve with the threats we see. As a result, apps installed from Google Play are an order of magnitude safer than apps from any other source. This year, we’ve been increasing all our detection capabilities for impersonators, repackaging, bad content, and other forms of abuse, but we know there's more we can do, and the threats are constantly changing. With your help, we’ve reduced access to sensitive data and have made Play even safer for children and families. We restricted SMS/Call log permissions to only apps that need them as part of their core functionality, and as a result 98% fewer apps access this sensitive data. Thanks to your hard work, users are safer, and know they are safer when they download apps that request fewer permissions.

The Android Developer Challenge!

Over ten years ago, we announced the first Android Developer Challenge. Today, modern Android is shaping the next generation platform. So it seems kind of fitting to announce: the Android Developer Challenge is back! The first Developer Challenge we’re announcing is Helpful Innovation and Machine Learning. Take Live Captions: for the almost 500 million people who are deaf and hard of hearing, Live Captions bring content to life and are exactly the type of machine learning-powered innovation we expect to see more of someday, and with your help we can turn someday into today. You can read more about the challenge here.

So - that’s a quick tour of Modern Android and the road ahead across our developer experience! Whether you’re joining us for Android Dev Summit in person or on the livestream, we have nearly 60 sessions from over 100 speakers where we’ll go deep on everything you need to know about Android. Thank you!

Android Developer Challenge: helpful innovation, powered by On-Device Machine Learning + you!

Posted by The Android team


Developers like you have always played an important role in shaping the direction of Android, fueling the wave of Android innovation. It’s the reason that when we first launched the SDK for Android 10+ years ago, we simultaneously announced the Android Developer Challenge: a way to help reward model apps and show us what user problems you wanted to solve. As Android continues to push the boundaries into emerging areas like ML, 5G, foldables and more, we need your help to bring to life the consumer experiences that will define these new frontiers.

So we’re bringing back the Android Developer Challenge and asking you to help us unlock new experiences on Android, and help inspire other developers around these emerging technologies.

As we kick off this challenge, the first area we’ll be focusing on is On-Device Machine Learning. At Google, we’re big believers in how this new technology can open up a world of helpful innovation so you can get things done in ways you never thought possible. Take Live Captions: for the almost 500 million people who are deaf and hard of hearing, Live Captions bring content to life and are exactly the type of machine learning-powered innovation we expect to see more of someday, and with your help we can turn someday into today!

Bringing your idea to life in front of billions of eyes

Got an idea? Whether it’s still a concept or ready for users, tell us how you could use Google’s help, and how it supports the mission of using machine learning to help people get something done. Join the #AndroidDeveloperChallenge topic on GitHub, and share your idea as a repository under this topic. Don’t forget to come back here and officially submit your concept.

We’ll pick 10 concepts and provide expertise and guidance to those developers to help in their plans to bring their ideas to fruition. And once the app is ready, we’ll help showcase it in front of the billions of users on Google Play, through a collection and more. Here’s what those 10 developers will get:

Expertise and development support bootcamp: We’ll work with you to provide expertise and guidance to help in your plans to bring your app from concept to reality, including:

  • An all-expenses paid, working session with a panel of experts at Google HQ in Mountain View, CA
  • Google engineer mentorship at the bootcamp, providing guidance and technical expertise to help bring your app to fruition

Exposure and street cred! Once your idea is ready for prime-time, we’ll help you get users, and celebrate you to the broader Android community, including:

  • A collection on Google Play where we’ll feature your app (apps must be ready for Google Play on May 1, and must meet Google’s minimum quality requirements)
  • Tickets to Google I/O 2020
  • And we’ll celebrate these experiences to the broader Android developer community on developers.android.com. We might even showcase you at Google I/O, in places like the sandbox, sessions, perhaps even a keynote!

Helpful innovation is an important investment area for us on the Android team, and On-Device Machine Learning has played a critical role in powering new features in the last several releases of Android. We’re just beginning to scratch the surface, and we can’t wait to see what you come up with!

Here’s how to watch the 2019 Android Dev Summit!

We’re less than 24 hours away from kicking off the 2019 Android Dev Summit, broadcasting live from the Google Events Center (MP7) in Sunnyvale, CA on October 23 & 24. We’ll be broadcasting the entire two days of the event live on YouTube, Twitter and on the Android Dev Summit website. Here’s what you need to know:

The keynote, sessions, sandbox and more, kicking off at 10AM PDT

The summit kicks off on October 23 at 10 a.m. PDT with a keynote, where you'll hear from Dave Burke, VP Engineering for Android, Stephanie Cuthbertson, Director of Product Management and others on the present and future of Android development. From there, we'll dive into two days of deep technical content from the Android engineering team, on topics such as Android platform, Android Studio, Android Jetpack, Kotlin, Google Play, and more. The full agenda is here so you can plan your summit experience.

#AskAndroid your pressing questions!

Tweet us your best questions using the hashtag #AskAndroid in the lead-up to the Android Dev Summit. We’ve gathered experts from Jetpack to Kotlin to Android 10, so we’ve got you covered. We’ll be answering your questions live between sessions on the livestream. Plus, we will share updates directly from the Google Events Center to our social channels, so be sure to follow along!

Previewing #AndroidDevSummit: Sessions, App, & Livestream Details

Posted by The #AndroidDevSummit team

In two weeks, we'll be welcoming Android developers from around the world at Android Dev Summit 2019, broadcasting live from the Google Events Center (MP7) in Sunnyvale, CA on October 23 & 24. Whether you’re joining us in person or via the livestream, we’ve got a great show planned for you; starting today, you can read more details about the keynote, sessions, codelabs, sandbox, the mobile app, and how online viewers can participate.

The keynote, sessions, sandbox and more, kicking off at 10AM PDT

The summit kicks off on October 23 at 10 a.m. PDT with a keynote, where you'll hear from Dave Burke, VP Engineering for Android, and others on the present and future of Android development. From there, we'll dive into two days of deep technical content from the Android engineering team, on topics such as Android platform, Android Studio, Android Jetpack, Kotlin, Google Play, and more.

The full agenda is now available, so you can start to plan your summit experience. We'll also have demos in the sandbox, hands-on learning with codelabs, plus exclusive content for those watching on the livestream.

Get the Android Dev Summit app on Google Play!

The official app for Android Dev Summit 2019 has started rolling out on Google Play. With the app, you can explore the agenda by searching through topics and speakers. Plan your summit experience by saving events to your personalized schedule. You’ll be able to watch the livestream and find recordings after sessions occur, and more. (By the way, the app is also an Instant app, so with one tap you can try it out first before installing!)

2019 #AndroidDevSummit app

A front-row seat from your desk, and #AskAndroid your pressing questions!

We’ll be broadcasting live on YouTube and Twitter starting October 23 at 10 a.m. PDT. In addition to a front row seat to over 25 Android sessions, there will be exclusive online-only content and an opportunity to have your most pressing Android development questions answered live, broadcast right from the event.

Tweet us your best questions using the hashtag #AskAndroid in the lead-up to the Android Dev Summit. We’ve gathered experts from Jetpack to Kotlin to Android 10, so we’ve got you covered. We’ll be answering your questions live between sessions on the livestream. Plus, we will share updates directly from the Google Events Center to our social channels, so be sure to follow along!

Continuous testing with new Android emulator tools

Posted by Lingfeng Yang, Android Studio team

Developers often use the Android Emulator during their day-to-day development to quickly test the latest changes before committing them. In addition, developers are increasingly using the emulator in their continuous integration (CI) systems to run a larger suite of automated tests. To better support this use case, we are open sourcing the Android Emulator Container Scripts and improving the developer experience around two pain points:

  1. Deployability - finding and running the desired version of Android Emulator.
  2. Debuggability - tracking down bugs from remote instances of Android Emulator.

Deployability

Android supports a wide variety of hardware and software configurations, and the Android Emulator is no different. However, this wide variety can create confusion over environment configurations. How should developers obtain emulators and system images? What drivers are required? How do you run with or without CPU or GPU acceleration? And so on.

To address this we have launched:

  • Android Emulator Download Script - This script provides the current up-to-date lists of emulator images (both AOSP and with Google Play Services) as well as emulator binaries (supporting Linux, macOS, and Windows). You can integrate this with your existing continuous integration system. Going forward, we aim to enhance this service to enable downloading of deprecated versions in addition to the latest versions, making it easier to reproduce historical test results.
  • Android Emulator Docker image generator - Android system images and the emulator are only one part of the story. For the environment, drivers, and pre-installed system dependencies, we put together a Docker image generator. This creates the complete environment in which the Android Emulator runs. After you start up the Docker image, either 1) port forwarding and ADB, or 2) gRPC and WebRTC, makes interaction with the emulator possible. Currently, the Docker image generator is designed to work on Linux. We are also looking at macOS and Windows hosts, so stay tuned!

To increase reproducibility, the underlying Dockerfile template makes the required command line flags and system dependencies more explicit (and reproducible via building Docker images from them). For hardware acceleration, note the --privileged flag that is passed to run.sh; we assume CPU acceleration is available when running the emulator, and --privileged is needed to run the containers with CPU acceleration (KVM) enabled.

For more details on how to create and deploy the Android Emulator image, go to the README.

Debuggability

When the emulator is running and a test or the emulator fails, it can be difficult to dive into the running environment and diagnose the error. Often, diagnosis requires direct interaction with the virtual device. We provide two mechanisms for direct interaction:

  1. ADB
  2. Remote streaming

In the case of ADB, we allow all commands, such as logcat and shell, by forwarding a particular port from the Docker guest to the host. The current port is 5555; we'll need to collect more feedback and do more research on how best to separate ports across different containers.

Remote streaming

Security note: With remote streaming, keep in mind that once the service is started, anyone who can connect to your computer on port 80/443 can interact with the emulator. So be careful with running this on a public server!

With remote streaming, you can run the emulator in a container, which is as interactive as running locally. Running the emulator in a container makes it easier to debug issues that can be hard to discover using ADB commands. You can access the emulator using a browser with WebRTC, which is used to stream the video, and gRPC, which is used to send mouse and keyboard events to the emulator. Remote streaming requires three containers:

  1. A container that hosts the latest emulator
  2. A container with an Envoy web proxy needed for gRPC
  3. A container with nginx to serve the React web app

You can compose the Docker containers together using docker-compose, as described in the README. The containers bind to ports 80 and 443, so make sure you do not have a web server running. When you point your browser at the host, it will be offered a self-signed certificate, after which you should see the emulator's web UI.

Again, keep in mind that anyone who can connect to your host can interact with the emulator. So be careful with running this on a public server!

Let’s scale testing!

Testing can seem to be a tax on development time. However, as many seasoned developers have seen, proper automated testing can increase development velocity as the code base becomes bigger and more complex. Continuous testing should give you confidence that the change you make won’t break your app.

Make stronger decisions with new Google Play Console data

Posted by Tom Grinsted, Product Manager, Google Play

At this year’s Google I/O, we announced a slate of new features to help you take your business further with Google Play. Launching today, these changes include several improvements designed to help you make better decisions about your business by providing clearer, more actionable data.

We know the right data is critical to help you improve your app performance and grow your business. That’s why we’re excited to share a major update that enables you to better measure and analyze your core statistics — the most fundamental install and uninstall metrics by user and device. We’ve also enhanced the Statistics page on the Play Console to show change over time, enable more granular configurations, and, coming soon, exclusive benchmarks for core stats!

Statistics page on the Play Console

More granular configurations are now available on the Statistics page to help you better understand your acquisition and churn.

More accurate and more expansive than before, the new metrics will help you better understand your acquisition and churn. For the first time, we are including data on returning users and devices, something that we understand is critical to many developers' growth strategies.

We’re also including new install methods (such as pre-installs and peer-to-peer sharing) and the ability to aggregate and dedupe over periods that suit your business needs. With these new updates, you can perform analyses that weren’t possible before, such as how many people re-installed your app last month.

Here’s what else is new:

  • Clearer, consistent metrics definitions:
    • Select users or devices, acquisitions or losses
    • Define if you’re interested in new, returning, or all users
    • Measure events (for example, when someone installs) or uniques (for instance, every person who installs)
  • Change analysis charts automatically show the largest changes during a selected period of time for a given dimension, making it easy to see the largest contributors to your metric trends.
  • Saved reports allow you to configure your metrics just the way you want them, then save them for easy retrieval and common analyses.
  • Suggested reports help you to find interesting ways to combine your data for more valuable analysis.
  • And finally, all configured data can be downloaded as CSVs from within the interface.

As a result of these updates, you will notice a few changes to your metrics. Old metrics names will be deprecated, but you can configure new metrics that map to the old ones with this cheat sheet. And don’t forget to use the ‘save report’ feature on the stats page so you can easily return to any configurations you find particularly helpful!

Save report feature on the stats page

Don’t forget to use the ‘save this report’ feature on the stats page to easily return to any configurations you find particularly helpful.

Other metrics like active user and active device will see a step-change as the new definitions are more expansive and include previously under-counted data.

Some new metrics map onto older ones. Where this happens, all historic data will be automatically included. But in other cases new metrics will only be generated from launch day. For unique devices or users, weekly metrics will start to appear two weeks after launch, monthly metrics once there’s a single full month’s data, and quarterly metrics once there’s a full quarter’s data.

We know it’s a lot to take in at once, so make sure to bookmark the cheat sheet for helpful tips as you navigate the transition and explore your new metrics. Additionally, our Decision-Making with the Google Play Console session from Google I/O and our Play Academy training are other great resources to help you get up to speed.

Check out these updates in the Google Play Console today — we hope you find them useful. Your comments help to shape the future of Google Play, so please continue to let us know what you think.
