
Android Studio Ladybug Feature Drop is Stable!

Posted by Steven Jenkins – Product Manager, Android Studio

Today, we are thrilled to announce the stable release of Android Studio Ladybug 🐞 Feature Drop (2024.2.2)!

Accelerate your productivity with Gemini in Android Studio, Animation Preview support for Wear OS Tiles, App Links Assistant, and much more. All of these new features are designed to help you build high-quality Android apps faster.

Read on to learn more about all the updates, quality improvements, and new features across your key workflows in Android Studio Ladybug Feature Drop, and download the latest stable version today to try them out!

Android Studio Ladybug Feature Drop

Gemini in Android Studio

Gemini Code Transforms

Gemini Code Transforms can help you modify, optimize, or add code to your app with AI assistance. Simply right-click in your code editor and select "Gemini > Generate code" or highlight code and select "Gemini > Transform selected code." You can also use the keyboard shortcut Ctrl+\ (⌘+\ on macOS) to bring up the Gemini prompt. Describe the changes you want to make to your code, and Gemini will suggest a code diff, allowing you to easily review and accept only the suggestions you want.

With Gemini Code Transforms, you can simplify complex code, perform specific code transformations, or even generate new functions. You can also refine the suggested code to iterate on the code suggestions with Gemini. It's an AI coding assistant right in your editor, helping you write better code more efficiently.

[Screenshot: Gemini Code Transform in the Android Studio code editor]

Rename

Gemini in Android Studio enhances your workflow with intelligent assistance for common tasks. When renaming a single variable, class, or method from the code editor, the "Refactor > Rename" action uses Gemini to suggest contextually appropriate names, making it smoother and more efficient to refactor names as you’re coding in the editor.

[Screenshot: Gemini suggesting names while renaming a variable in Android Studio]

Rethink

For larger renaming refactors, Gemini can "Rethink variable names" across your whole file. This feature analyzes your code and suggests more intuitive and descriptive names for variables and methods, improving readability and maintainability.

[Screenshot: Gemini analyzing code and suggesting more descriptive variable names in Android Studio]

Commit Message

Gemini now assists with commit messages. When committing changes to version control, it analyzes your code modifications and suggests a detailed commit message.

[Screenshot: Gemini suggesting a detailed commit message in Android Studio]

Generate Documentation

Gemini in Android Studio makes documenting your code easier than ever. To generate clear and concise documentation, select a code snippet, right-click in the editor and choose "Gemini > Document Function" (or "Document Class" or "Document Property", depending on the context). Gemini will generate a draft that you can then refine and perfect before accepting the changes. This streamlined process helps you create informative documentation quickly and efficiently.

[Screenshot: Gemini generating documentation for a code snippet in Android Studio]

Debug

Animation Preview support for Wear OS Tiles

Animation Preview support for Wear OS Tiles helps you visualize and debug tile animations with ease. It provides a real-time view of your animations, allowing you to preview them, control playback with options like play, pause, and speed adjustment, and inspect key properties such as initial/end states and animation curves. You can even dynamically modify animation code and instantly observe the results within the inspector, streamlining the debugging and refinement process.

[Screenshot: Animation Preview support for Wear OS Tiles in Android Studio]

Wear Health Services

The Wear Health Services feature in Android Studio simplifies the process of testing health and fitness apps by enabling Wear Health Services within the emulator. You can now easily customize various parameters for a given exercise such as heart rate, distance, and speed without needing a physical device or performing the activity itself. This streamlines the development and testing workflow, allowing for faster iteration and more efficient debugging of health-related features.

[Screenshot: Wear Health Services in the Android Studio emulator]

Optimize

App Links Assistant

App Links Assistant simplifies the process of implementing app links by serving valid JSON syntax that resolves broken deep links for your app. You can review the JSON file and then upload it to your website, resolving issues quickly. This eliminates the manual creation of the JSON file, saving you time and effort. The tool also allows you to compare existing JSON files with newly generated ones to easily identify any discrepancies.

[Screenshot: App Links Assistant in Android Studio]

Google Play SDK Insights Integration

Android Studio now provides enhanced lint warnings for public SDKs from the Google Play SDK Index and the Google Play SDK Console, helping you identify and address potential issues. These warnings alert you if an SDK is outdated, violates Google Play policies, or has known security vulnerabilities. Furthermore, Android Studio provides helpful quick fixes and recommended version ranges whenever possible, making it easier to update your dependencies and keeping your app more secure and compliant.

[Screenshot: A Gradle build file in Android Studio with a lint warning that an outdated Firebase Authentication library would prevent release through the Google Play Console]

Quality improvements

Beyond new features, we also continued to improve the overall quality and stability of Android Studio. In fact, the Android Studio team addressed over 770 bugs during the Ladybug Feature Drop development cycle.

IntelliJ platform update

Android Studio Ladybug Feature Drop (2024.2.2) includes the IntelliJ 2024.2 platform release, which has many new features such as more intuitive full line code completion suggestions, a preview in the Search Everywhere dialog, and improved log management for the Java** and Kotlin programming languages.

See the full IntelliJ 2024.2 release notes.

Summary

To recap, Android Studio Ladybug Feature Drop includes the following enhancements and features:

Gemini in Android Studio

    • Gemini Code Transforms
    • Rename
    • Rethink
    • Commit Message
    • Generate Documentation

Debug

    • Animation Preview support for Wear OS Tiles
    • Wear Health Services

Optimize

    • App Links Assistant
    • Google Play SDK Insights Integration

Quality Improvements

    • 770+ bugs addressed

IntelliJ Platform Update

    • More intuitive full line code completion suggestions
    • Preview in the Search Everywhere dialog
    • Improved log management for Java and Kotlin programming languages

Getting Started

Ready for next-level Android development? Download Android Studio Ladybug Feature Drop and unlock these cutting-edge features today. As always, your feedback is important to us – check known issues, report bugs, suggest improvements, and be part of our vibrant community on LinkedIn, Medium, YouTube, or X. Let's build the future of Android apps together!


**Java is a trademark or registered trademark of Oracle and/or its affiliates.

Performance Class helps Google Maps deliver premium experiences

Posted by Nevin Mital - Developer Relations Engineer, Android Media

The Android ecosystem features a diverse range of devices, and it can be difficult to build experiences that take advantage of new or premium hardware features while still working well for users on all devices. With Android 12, we introduced the Media Performance Class (MPC) standard to help developers better understand a device’s capabilities and identify high-performing devices. For a refresher on what MPC is, please see our last blog post, Using performance class to optimize your user experience, or check out the Performance Class documentation.

Earlier this year, we published the first stable release of the Jetpack Core Performance library as the recommended solution for more reliably obtaining a device’s MPC level. In particular, this library introduces the PlayServicesDevicePerformance class, an API that queries Google Play Services to get the most up-to-date MPC level for the current device and build. I’ll get into the technical details further down, but let’s start by taking a look at how Google Maps was able to tailor a feature launch to best fit each device with MPC.

Performance Class unblocks premium experience launch for Google Maps

Google Maps recently took advantage of the expanded device coverage enabled by the Play Services module to unblock a feature launch. Google Maps wanted to update their UI by increasing the transparency of some layers, which meant rendering more of the map. They had to stop the initial rollout after seeing latency increases on many devices, especially towards the low end. To resolve this, the Maps team started by slicing an existing key metric, “seconds to UI item visibility”, by MPC level, which revealed that while all devices showed a small increase in this latency, devices without an MPC level had the largest increase.

[Chart: A/B test results for “seconds to UI item visibility”, comparing the control against the increased-transparency UI across Media Performance Class levels. Devices qualifying for an MPC level shipped with the updated experience; devices without an MPC level kept the previous UI.]

With these results in hand, Google Maps started their rollout again, but this time only launching the feature on devices that report an MPC level. As devices continue to get updated and meet the bar for MPC, the updated Google Maps UI will be available to them as well.

The new Play Services module

MPC level requirements are defined in the Android Compatibility Definition Document (CDD), then devices and Android builds are validated against these requirements by the Android Compatibility Test Suite (CTS). The Play Services module of the Jetpack Core Performance library leverages these test results to continually update a device’s reported MPC level without any additional effort on your end. This also means that you’ll immediately have access to the MPC level for new device launches without needing to acquire and test each device yourself, since it already passed CTS. If the MPC level is not available from Google Play Services, the library will fall back to the MPC level declared by the OEM as a build constant.

[Diagram: How a device's Performance Class level is determined, involving manufacturers, CTS tests, a grader, the Play Services module, and the CDD.]

As of writing, more than 190M in-market devices covering over 500 models across 40+ brands report an MPC level. This coverage will continue to grow over time as older devices running Android 11 and up update to newer builds.

Using the Core Performance library

To use Jetpack Core Performance, start by adding a dependency for the relevant modules in your Gradle configuration, and create an instance of DevicePerformance. Initializing a DevicePerformance should only happen once in your app, as early as possible - for example, in the onCreate() lifecycle event of your Application. In this example, we’ll use the Google Play services implementation of DevicePerformance.

// Implementation of Jetpack Core library.
implementation("androidx.core:core-ktx:1.12.0")
// Enable APIs to query for device-reported performance class.
implementation("androidx.core:core-performance:1.0.0")
// Enable APIs to query Google Play Services for performance class.
implementation("androidx.core:core-performance-play-services:1.0.0")

import android.app.Application
import androidx.core.performance.DevicePerformance
import androidx.core.performance.play.services.PlayServicesDevicePerformance

class MyApplication : Application() {
  lateinit var devicePerformance: DevicePerformance

  override fun onCreate() {
    // Use a class derived from the DevicePerformance interface
    devicePerformance = PlayServicesDevicePerformance(applicationContext)
  }
}

Then, later in your app when you want to retrieve the device’s MPC level, you can call getMediaPerformanceClass():

import android.app.Activity
import android.os.Build
import android.os.Bundle
import androidx.core.performance.DevicePerformance

class MyActivity : Activity() {
  private lateinit var devicePerformance: DevicePerformance
  override fun onCreate(savedInstanceState: Bundle?) {
    super.onCreate(savedInstanceState)
    // Note: Good app architecture is to use a dependency framework. See
    // https://developer.android.com/training/dependency-injection for more
    // information.
    devicePerformance = (application as MyApplication).devicePerformance
  }

  override fun onResume() {
    super.onResume()
    when {
      devicePerformance.mediaPerformanceClass >= Build.VERSION_CODES.UPSIDE_DOWN_CAKE -> {
        // MPC level 34 and later.
        // Provide the most premium experience for the highest performing devices.
      }
      devicePerformance.mediaPerformanceClass == Build.VERSION_CODES.TIRAMISU -> {
        // MPC level 33.
        // Provide a high quality experience.
      }
      else -> {
        // MPC level 31, 30, or undefined.
        // Remove extras to keep experience functional.
      }
    }
  }
}

Strategies for using Performance Class

MPC is intended to identify high-end devices, so you can expect to see MPC levels for the top devices from each year, which are the devices you’re likely to want to be able to support for the longest time. For example, the Pixel 9 Pro released with Android 14 and reports an MPC level of 34, the latest level definition when it launched.

You should use MPC as a complement to any device clustering solutions you already use, such as querying a device’s static specs or manually blocklisting problematic devices. An area where MPC can be a particularly helpful tool is new device launches: new devices report an MPC level at launch, so you can use MPC to gauge their capabilities right from the start, without needing to acquire the hardware yourself or manually test each device.

A great first step to get involved is to include MPC levels in your telemetry. This can help you identify patterns in error reports or generally get a better sense of the devices your user base uses if you segment key metrics by MPC level. From there, you might consider using MPC as a dimension in your experimentation pipeline, for example by setting up A/B testing groups based on MPC level, or by starting a feature rollout with the highest MPC level and working your way down. As discussed previously, this is the approach that Google Maps took.

You could further use MPC to tune a user-facing feature, for example by adjusting the number of concurrent video playbacks your app attempts based on the MPC level’s concurrent codec guarantees. However, make sure to still query a device’s runtime capabilities when using this approach, as they may differ depending on the environment and state the device is in.
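
As a sketch of that approach: the stream budget below is an illustrative mapping (not a documented MPC guarantee), and the runtime check asks the selected decoder how many concurrent instances it currently supports via MediaCodecInfo.CodecCapabilities.

import android.media.MediaCodecList
import android.media.MediaFormat
import android.os.Build
import androidx.core.performance.DevicePerformance

// Illustrative mapping from MPC level to a concurrent-stream budget.
fun maxConcurrentStreams(devicePerformance: DevicePerformance): Int {
    val mpcBudget = when {
        devicePerformance.mediaPerformanceClass >= Build.VERSION_CODES.UPSIDE_DOWN_CAKE -> 4
        devicePerformance.mediaPerformanceClass >= Build.VERSION_CODES.TIRAMISU -> 2
        else -> 1
    }
    // Still check runtime capabilities: find an AVC decoder and ask how many
    // instances it supports, then take the smaller of the two numbers.
    val codecList = MediaCodecList(MediaCodecList.REGULAR_CODECS)
    val format = MediaFormat.createVideoFormat(MediaFormat.MIMETYPE_VIDEO_AVC, 1920, 1080)
    val decoderName = codecList.findDecoderForFormat(format) ?: return 1
    val runtimeMax = codecList.codecInfos
        .first { it.name == decoderName }
        .getCapabilitiesForType(MediaFormat.MIMETYPE_VIDEO_AVC)
        .maxSupportedInstances
    return minOf(mpcBudget, runtimeMax)
}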

Get in touch!

If MPC sounds like it could be useful for your app, please give it a try! You can get started by taking a look at our sample code or documentation. We welcome you to share any questions or feedback you have in this short form.


This blog post is a part of Camera and Media Spotlight Week. We're providing resources – blog posts, videos, sample code, and more – all designed to help you uplevel the media experiences in your app.

To learn more about what Spotlight Week has to offer and how it can benefit you, be sure to read our overview blog post.

Spotlight Week: Android Camera and Media

Posted by Caren Chang- Android Developer Relations Engineer

Android offers Camera and Media APIs to help you build apps that can capture, edit, share, and play media. To help you make Android camera and media experiences even more delightful for your users, this week we're kicking off the Camera and Media Spotlight Week.

This Spotlight Week will provide resources—blog posts, videos, sample code, and more—all designed to help you uplevel the media experiences in your app. Check out highlights from the latest releases in Camera and Media APIs, including better Jetpack Compose support in CameraX, motion photo support in Media3 Transformer, simpler ExoPlayer setup, and much more! We’ll also bring in developers from the community to talk about their experiences building Android camera and media apps.


Here’s what we’re covering during Camera and Media Spotlight week:

What’s new in camera and media

Tuesday, January 7

Check out what’s new in the latest CameraX and Media3 releases, including how to get started with building Camera apps with Compose.

Creating delightful and premium experiences

Wednesday, January 8

Building delightful and premium experiences for your users is what can help your app really stand out. Learn about different ways to achieve this, such as utilizing the Media Performance Class or enabling HDR video capture in your app. Hear from developers, including how Google Drive enabled Ultra HDR images in their Android app and how Instagram improved the in-app image capture experience by implementing Night Mode.

Adaptive for camera and media, for large screens and now XR!

Thursday, January 9

Thinking adaptive is important so your app works just as well on phones as it does on large screens, like foldables, tablets, ChromeOS, cars, and the new Android XR platform! On Thursday, we’ll be diving into the media experience on large screen devices, and how you can build a smooth tabletop mode into your camera applications. Prepare your apps for XR devices by considering Spatial Audio and Video.

Media creation

Friday, January 10

Capturing, editing, and processing media content are fundamental features of the Android ecosystem. Learn about how Media3’s Transformer module can help your app’s media processing use cases, and see case studies of apps that are using Transformer in production. Listen in to how the 1 Second Everyday Android app approaches media use cases, and check out a new API that allows apps to capture concurrent camera streams. Learn from Android Google Developer Expert Tom Colvin how he experimented with building an AI-powered camera app.


These are just some of the things to think about when building camera and media experiences in your app. Keep checking this blog post for updates; we’ll be adding links and more throughout the week.

Media3 1.5.0 — what’s new?

Posted by Kristina Simakova – Engineering Manager

This article is cross-published on Medium

Media3 1.5.0 is now available!

Transformer now supports motion photos and faster image encoding. We’ve also simplified the setup for DefaultPreloadManager and ExoPlayer, making it easier to use. But that’s not all! We’ve included a new IAMF decoder, a Kotlin listener extension, and easier Player optimization through delegation.

To learn more about all new APIs and bug fixes, check out the full release notes.

Transformer improvements

Motion photo support

Transformer now supports exporting motion photos. The motion photo’s image is exported if the corresponding MediaItem’s image duration is set (see MediaItem.Builder().setImageDurationMs()). Otherwise, the motion photo’s video is exported. Note that the EditedMediaItem’s duration should not be set in either case, as it will automatically be set to the corresponding MediaItem’s image duration.
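
For example, here's a minimal sketch of exporting the motion photo's image (motionPhotoUri is a placeholder); leaving setImageDurationMs unset would export its video instead:

// Setting an image duration selects the motion photo's image for export.
// The EditedMediaItem's duration is deliberately left unset.
val mediaItem = MediaItem.Builder()
    .setUri(motionPhotoUri) // placeholder Uri pointing at the motion photo
    .setImageDurationMs(5_000) // export the image as 5 seconds of video
    .build()
val editedMediaItem = EditedMediaItem.Builder(mediaItem).build()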

Faster image encoding

This release accelerates image-to-video encoding, thanks to optimizations in DefaultVideoFrameProcessor.queueInputBitmap(). DefaultVideoFrameProcessor now treats the Bitmap given to queueInputBitmap() as immutable. The GL pipeline will resample and color-convert the input Bitmap only once. As a result, Transformer operations that take large (e.g. 12 megapixels) images as input execute faster.

AudioEncoderSettings

Similar to VideoEncoderSettings, Transformer now supports AudioEncoderSettings which can be used to set the desired encoding profile and bitrate.
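
As a rough sketch, assuming AudioEncoderSettings mirrors the VideoEncoderSettings builder pattern and is wired up through DefaultEncoderFactory:

// Sketch: request a specific audio bitrate for exports. The builder and
// DefaultEncoderFactory method names are assumed to mirror the video APIs.
val audioEncoderSettings = AudioEncoderSettings.Builder()
    .setBitrate(128_000)
    .build()
val transformer = Transformer.Builder(context)
    .setEncoderFactory(
        DefaultEncoderFactory.Builder(context)
            .setRequestedAudioEncoderSettings(audioEncoderSettings)
            .build()
    )
    .build()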

Edit list support

Transformer now shifts the first video frame to start from 0. This fixes A/V sync issues in some files where an edit list is present.

Unsupported track type logging

This release includes improved logging for unsupported track types, providing more detailed information for troubleshooting and debugging.

Media3 muxer

In a previous release we added a new muxer library that can be used to create MP4 container files. The Media3 muxer offers support for a wide range of audio and video codecs, enabling seamless handling of diverse media formats. This new library also brings advanced features, including:

    • B-frame support
    • Fragmented MP4 output
    • Edit list support

The muxer library can be included as a Gradle dependency:

implementation ("androidx.media3:media3-muxer:1.5.0")

Media3 muxer with Transformer

To use the media3 muxer with Transformer, set an InAppMuxer.Factory (which internally wraps media3 muxer) as the muxer factory when creating a Transformer:

val transformer = Transformer.Builder(context)
    .setMuxerFactory(InAppMuxer.Factory.Builder().build())
    .build()

Simpler setup for DefaultPreloadManager and ExoPlayer

With Media3 1.5.0, we added DefaultPreloadManager.Builder, which makes it much easier to build the preload components and the player. Previously, you had to instantiate several required components (RenderersFactory, TrackSelectorFactory, LoadControl, BandwidthMeter, and the preload/playback Looper) first, and be careful to share those components correctly when injecting them into the DefaultPreloadManager constructor and the ExoPlayer.Builder. With the new DefaultPreloadManager.Builder this becomes a lot simpler:

    • Build DefaultPreloadManager and ExoPlayer instances with all default components.
val preloadManagerBuilder = DefaultPreloadManager.Builder()
val preloadManager = preloadManagerBuilder.build()
val player = preloadManagerBuilder.buildExoPlayer()

    • Build DefaultPreloadManager and ExoPlayer instances with custom shared components.
val preloadManagerBuilder = DefaultPreloadManager.Builder().setRenderersFactory(customRenderersFactory)
// The resulting preloadManager uses customRenderersFactory
val preloadManager = preloadManagerBuilder.build()
// The resulting player uses customRenderersFactory
val player = preloadManagerBuilder.buildExoPlayer()

    • Build DefaultPreloadManager and ExoPlayer instances, while setting custom playback-only configurations on the ExoPlayer.
val preloadManagerBuilder = DefaultPreloadManager.Builder()
val preloadManager = preloadManagerBuilder.build()
// Tune the playback-only configurations
val playerBuilder = ExoPlayer.Builder().setFooEnabled()
// The resulting player will have playback feature "Foo" enabled
val player = preloadManagerBuilder.buildExoPlayer(playerBuilder)

Preloading the next playlist item

We’ve added the ability to preload the next item in the playlist of ExoPlayer. By default, playlist preloading is disabled but can be enabled by setting the duration which should be preloaded to memory:

player.preloadConfiguration =
    PreloadConfiguration(/* targetPreloadDurationUs= */ 5_000_000L)

With the PreloadConfiguration above, the player tries to preload five seconds of media for the next item in the playlist. Preloading is only started when no media is being loaded that is required for the ongoing playback. This way preloading doesn’t compete for bandwidth with the primary playback.

When enabled, preloading can help minimize join latency when a user skips to the next item before the playback buffer reaches the next item. The first period of the next window is prepared and video, audio and text samples are preloaded into its sample queues. The preloaded period is later queued into the player with preloaded samples immediately available and ready to be fed to the codec for rendering.

Once opted in, playlist preloading can be turned off again by using PreloadConfiguration.DEFAULT:

player.preloadConfiguration = PreloadConfiguration.DEFAULT

New IAMF decoder and Kotlin listener extension

The 1.5.0 release includes a new media3-decoder-iamf module, which allows playback of IAMF immersive audio tracks in MP4 files. Apps wanting to try this out will need to build the libiamf decoder locally. See the media3 README for full instructions.

implementation ("androidx.media3:media3-decoder-iamf:1.5.0")

This release also includes a new media3-common-ktx module, a home for Kotlin-specific functionality. The first version of this module contains a suspend function that lets the caller listen to Player.Listener.onEvents. This is a building block that’s used by the upcoming media3-ui-compose module (launching with media3 1.6.0) to power a Jetpack Compose playback UI.

implementation ("androidx.media3:media3-common-ktx:1.5.0")

Easier Player customization via delegation

Media3 has provided a ForwardingPlayer implementation since version 1.0.0, and we have previously suggested that apps should use it when they want to customize the way certain Player operations work, by using the decorator pattern. One very common use-case is to allow or disallow certain player commands (in order to show/hide certain buttons in a UI). Unfortunately, doing this correctly with ForwardingPlayer is surprisingly hard and error-prone, because you have to consistently override multiple methods and handle the listener as well. The example code demonstrating how fiddly this is would be too long for this blog, so we’ve put it in a gist instead.

In order to make these sorts of customizations easier, 1.5.0 includes a new ForwardingSimpleBasePlayer, which builds on the consistency guarantees provided by SimpleBasePlayer to make it easier to create consistent Player implementations following the decorator pattern. The same command-modifying Player is now much simpler to implement:

class PlayerWithoutSeekToNext(player: Player) : ForwardingSimpleBasePlayer(player) {
  override fun getState(): State {
    val state = super.getState()
    return state
      .buildUpon()
      .setAvailableCommands(
        state.availableCommands.buildUpon().remove(COMMAND_SEEK_TO_NEXT).build()
      )
      .build()
  }

  // We don't need to override handleSeek, because it is guaranteed not to be called for
  // COMMAND_SEEK_TO_NEXT since we've marked that command unavailable.
}

MediaSession: Command button for media items

Command buttons for media items allow a session app to declare commands supported by certain media items that then can be conveniently displayed and executed by a MediaController or MediaBrowser:

[Screenshot: Command buttons for media items in the Media Center of Android Automotive OS.]

You'll find the detailed documentation on developer.android.com.

This is the Media3 equivalent of the legacy “custom browse actions” API, with which Media3 is fully interoperable. Unlike the legacy API, command buttons for media items do not require a MediaLibraryService but are a feature of the Media3 MediaSession instead. Hence they are available for MediaController and MediaBrowser in the same way.


If you encounter any issues, have feature requests, or want to share feedback, please let us know using the Media3 issue tracker on GitHub. We look forward to hearing from you!


This blog post is a part of Camera and Media Spotlight Week. We're providing resources – blog posts, videos, sample code, and more – all designed to help you uplevel the media experiences in your app.

To learn more about what Spotlight Week has to offer and how it can benefit you, be sure to read our overview blog post.

Use ad inspector to debug your mobile applications

Ad inspector is an in-app overlay that enables authorized devices to perform real-time analysis of Google Mobile Ads SDK test ad requests directly within your mobile app. It is included with the Google Mobile Ads SDK and you can enable it with no coding required.

Ad inspector empowers you to thoroughly test all your ad sources before releasing those changes to your users so you can verify everything is working properly. To help you understand and utilize ad inspector effectively, we published a 7-part ad inspector video series on our Google AdMob YouTube channel.

Each video focuses on a specific challenge in testing your ad integration, offering in-depth tutorials and demonstrations on how to:

Check out our ad inspector documentation (Android, iOS, Unity, Flutter) to learn more. If you have questions, comments, or general feedback about ad inspector, contact us in the developer forum. And remember to subscribe to our Google AdMob YouTube channel for more technical content.


Celebrating Another Year of #WeArePlay

Posted by Robbie McLachlan – Developer Marketing

This year #WeArePlay took us on a journey across the globe, spotlighting 300 people behind apps and games on Google Play. From a founder whose app uses AI to assist visually impaired people to a game where nimble-fingered players slice flying fruits and use special combos to beat their own high score, we met founders transforming ideas into thriving businesses.

Let’s start by taking a look back at the people featured in our global film series. From a mother and son duo preserving African languages, to a founder whose app helps kids become published authors - check out the full playlist.


We also continued our tour around the world with:

And we released global collections of 36 stories, each with a theme reflecting the diversity of the app and game community on Google Play, including:


To the global community of app and game founders, thank you for sharing your inspiring journey. As we enter 2025, we look forward to discovering even more stories of the people behind games and apps businesses on Google Play.




The Second Developer Preview of Android 16

Posted by Matthew McCullough – VP of Product Management, Android Developer


The second developer preview of Android 16 is now available to test with your apps. This build includes changes designed to enhance the app experience, improve battery life, and boost performance while minimizing incompatibilities, and your feedback is critical in helping us understand the full impact of this work.

System triggered profiling

ProfilingManager was added in Android 15, giving apps the ability to request profiling data collection using Perfetto on public devices in the field. To help capture challenging trace scenarios such as startups or ANRs, ProfilingManager now includes System Triggered Profiling. Apps can use ProfilingManager#addProfilingTriggers() to register interest in receiving information about these flows. Flows covered in this release include onFullyDrawn for activity-based cold starts, and ANRs.

val anrTrigger = ProfilingTrigger.Builder(ProfilingTrigger.TRIGGER_TYPE_ANR)
    .setRateLimitingPeriodHours(1)
    .build()

val startupTrigger: ProfilingTrigger = //...

mProfilingManager.addProfilingTriggers(listOf(anrTrigger, startupTrigger))

Start component in ApplicationStartInfo

ApplicationStartInfo was added in Android 15, allowing an app to see reasons for process start, start type, start times, throttling, and other useful diagnostic data. Android 16 adds getStartComponent() to distinguish what component type triggered the start, which can be helpful for optimizing the startup flow of your app.
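
As a sketch, you can pair it with the Android 15 completion listener; the wiring below assumes you're inside an Activity, where mainExecutor is available:

// Sketch: log which component type triggered this process start once the
// start info is complete. getStartComponent() is the Android 16 addition.
val activityManager = getSystemService(ActivityManager::class.java)
activityManager.addApplicationStartInfoCompletionListener(mainExecutor) { startInfo ->
    Log.d("Startup", "Process start triggered by component type ${startInfo.startComponent}")
}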

Richer Haptics

Android has exposed limited control over the haptic actuator since its inception.

Android 11 added support for more complex haptic effects that more advanced actuators can support through VibrationEffect.Compositions of device-defined semantic primitives.

Android 16 adds haptic APIs that let apps define the amplitude and frequency curves of a haptic effect while abstracting away differences between device capabilities.
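
The preview notes don't spell out the API shape here, so the following is a sketch only: the WaveformEnvelopeBuilder name and addControlPoint parameters are assumptions, used to illustrate defining amplitude and frequency curves.

// Sketch only (names and signature assumed): an effect whose amplitude
// ramps from 0.2 to 1.0 while its frequency sweeps from 60 Hz to 120 Hz.
val effect = VibrationEffect.WaveformEnvelopeBuilder()
    .addControlPoint(/* amplitude= */ 0.2f, /* frequencyHz= */ 60f, /* durationMs= */ 100L)
    .addControlPoint(/* amplitude= */ 1.0f, /* frequencyHz= */ 120f, /* durationMs= */ 300L)
    .build()
val vibrator = context.getSystemService(Vibrator::class.java)
vibrator.vibrate(effect)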

Better job introspection

Android 16 introduces JobScheduler#getPendingJobReasons(int jobId) which can return multiple reasons why a job is pending, due to both explicit constraints set by the developer and implicit constraints set by the system.

We're also introducing JobScheduler#getPendingJobReasonsHistory(int jobId), which returns a list of the most recent constraint changes.

The API can help you debug why your jobs may not be executing, especially if you're seeing reduced success rates or latency issues with job completion. It can also help you understand whether certain jobs are not completing due to system-defined constraints rather than explicitly set constraints.
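
As a sketch, assuming a Context is in scope (MY_JOB_ID is a placeholder job ID, and the return value is assumed to be an array of reason constants):

// Sketch: log every reason the system reports for a still-pending job.
val jobScheduler = context.getSystemService(JobScheduler::class.java)
jobScheduler.getPendingJobReasons(MY_JOB_ID).forEach { reason ->
    Log.d("JobDebug", "Job $MY_JOB_ID still pending, reason code: $reason")
}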

Adaptive refresh rate

Adaptive refresh rate (ARR), introduced in Android 15, enables the display refresh rate on supported hardware to adapt to the content frame rate using discrete VSync steps. This reduces power consumption while eliminating the need for potentially jank-inducing mode-switching.

Android 16 DP2 introduces hasArrSupport() and getSuggestedFrameRate(int) while restoring getSupportedRefreshRates() to make it easier for your apps to take advantage of ARR.

RecyclerView 1.4 internally supports ARR when it is settling from a fling or smooth scroll, and we're continuing our work to add ARR support into more Jetpack libraries. This frame rate article covers many of the APIs you can use to set the frame rate so that your app can directly leverage ARR.

Job execution optimizations

Starting in Android 16, we're adjusting regular and expedited job execution runtime quota based on the following factors:

    • Which app standby bucket the application is in; active standby buckets will be given a generous runtime quota.
    • Jobs started while the app is visible to the user that continue after the app becomes invisible will adhere to the job runtime quota.
    • Jobs that are executing concurrently with a foreground service will adhere to the job runtime quota. If you need to perform a data transfer that may take a long time, consider using a user-initiated data transfer.
Note: To understand how to further debug and test the behavior change, read more about JobScheduler quota optimizations.

Fully deprecating JobInfo#setImportantWhileForeground

The JobInfo.Builder#setImportantWhileForeground(boolean) method indicates the importance of a job while the scheduling app is in the foreground or when temporarily exempted from background restrictions.

This method has been deprecated since Android 12 (API 31). Starting in Android 16, it will no longer function effectively, and calls to it will be ignored.

This removal of functionality also applies to JobInfo#isImportantWhileForeground(). Starting in Android 16, the method will always return false.

Deprecating disruptive accessibility announcements

Android 16 DP2 deprecates disruptive accessibility announcements, characterized by the use of announceForAccessibility or the dispatch of TYPE_ANNOUNCEMENT AccessibilityEvents. They can create inconsistent user experiences for users of TalkBack (Android's screen reader), and alternatives better serve a broader range of user needs across a variety of Android's assistive technologies.

Examples of alternatives:

The documentation for the deprecated announceForAccessibility API includes more detail on suggested alternatives.

Cloud search in photo picker

The photo picker provides a safe, built-in way for users to grant your app access to selected images and videos from both local and cloud storage, instead of their entire media library. Using a combination of Modular System Components through Google System Updates and Google Play services, it's supported back to Android 4.4 (API level 19). Integration requires just a few lines of code with the associated Android Jetpack library.
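
For reference, the existing integration really is just a few lines; here's a minimal sketch using the Activity Result API from an Activity or Fragment:

// Launch the system photo picker for a single image; the callback receives
// the selected Uri, or null if the user cancels.
val pickMedia = registerForActivityResult(ActivityResultContracts.PickVisualMedia()) { uri ->
    if (uri != null) {
        // Use the read-only access granted to the selected photo.
    }
}
pickMedia.launch(PickVisualMediaRequest(ActivityResultContracts.PickVisualMedia.ImageOnly))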

The developer preview includes new APIs to enable searching from the cloud media provider for the Android photo picker. Search functionality in the photo picker is coming soon.

Ranging with enhanced security

Android 16 adds support for robust security features in Wi-Fi location on supported devices with Wi-Fi 6's 802.11az, allowing apps to combine the higher accuracy, greater scalability, and dynamic scheduling of the protocol with security enhancements, including AES-256-based encryption and protection against man-in-the-middle (MITM) attacks. This allows it to be used more safely in proximity use cases, such as unlocking a laptop or a vehicle door. 802.11az is integrated with the Wi-Fi 6 standard, leveraging its infrastructure and capabilities for wider adoption and easier deployment.

Health Connect updates

Health Connect in the developer preview adds ACTIVITY_INTENSITY, a new datatype defined according to World Health Organization guidelines around moderate and vigorous activity. Each record requires the start time, the end time, and whether the activity intensity is moderate or vigorous.

Health Connect also contains updated APIs supporting health records. This allows apps to read and write medical records in FHIR format with explicit user consent. This API is currently in an early access program. Sign up if you'd like to be part of our early access program.

Predictive back additions

Android 16 adds new APIs to help you enable predictive back system animations in gesture navigation, such as the back-to-home animation. Registering an OnBackInvokedCallback with the new PRIORITY_SYSTEM_NAVIGATION_OBSERVER allows your app to receive the regular onBackInvoked call whenever the system handles a back navigation, without impacting the normal back navigation flow.
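
A minimal sketch of registering such an observer, assuming the new constant lives on OnBackInvokedDispatcher:

// Sketch: observe system-handled back navigations without intercepting them.
onBackInvokedDispatcher.registerOnBackInvokedCallback(
    OnBackInvokedDispatcher.PRIORITY_SYSTEM_NAVIGATION_OBSERVER
) {
    // A back navigation was just handled by the system; the normal flow
    // is unaffected at this observer priority.
}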

Android 16 additionally adds the finishAndRemoveTaskCallback() and moveTaskToBackCallback(). By registering these callbacks with the OnBackInvokedDispatcher, the system can trigger specific behaviors and play corresponding ahead-of-time animations when the back gesture is invoked.

Two Android API releases in 2025

This preview is for the next major release of Android with a planned launch in Q2 of 2025 and we plan to have another release with new developer APIs in Q4. The Q2 major release will be the only release in 2025 to include planned behavior changes that could affect apps. The Q4 minor release will pick up feature updates, optimizations, and bug fixes; it will not include any app-impacting behavior changes.

[Timeline: 2025 SDK releases, with feature-only updates in Q1 and Q3, a major SDK release (behavior changes, APIs, and features) in Q2, and a minor SDK release (APIs and features) in Q4.]

We'll continue to have quarterly Android releases. The Q1 and Q3 updates in between the API releases will provide incremental updates to help ensure continuous quality. We’re actively working with our device partners to bring the Q2 release to as many devices as possible.

There’s no change to the target API level requirements and the associated dates for apps in Google Play; our plans are for one annual requirement each year, and that will be tied to the major API level.

How to get ready

In addition to performing compatibility testing on the next major release, make sure that you're compiling your apps against the new SDK, and use the compatibility framework to enable targetSdkVersion-gated behavior changes as they become available for early testing.

App compatibility

[Timeline: Android 16 release stages from December to the final release, highlighting the Beta Releases and Platform Stability milestones.]

The Android 16 Preview program runs from November 2024 until the final public release next year. At key development milestones, we'll deliver updates for your development and testing environments. Each update includes SDK tools, system images, emulators, API reference, and API diffs. We'll highlight critical APIs as they are ready to test in the preview program in blogs and on the Android 16 developer website.

We’re targeting late Q1 of 2025 for our Platform Stability milestone. At this milestone, we’ll deliver final SDK/NDK APIs and also final internal APIs and app-facing system behaviors. We’re expecting to reach Platform Stability in March 2025, and from that time you’ll have several months before the official release to do your final testing. Learn more in the release timeline details.

Get started with Android 16

You can get started today with Developer Preview 2 by flashing a system image and updating the tools. If you are currently on Developer Preview 1, you will automatically get an over-the-air update to Developer Preview 2. We're looking for your feedback so please report issues and submit feature requests on the feedback page. The earlier we get your feedback, the more we can include in the final release.

For the best development experience with Android 16, we recommend that you use the latest preview of the Android Studio Ladybug feature drop. Once you’re set up, here are some of the things you should do:

    • Compile against the new SDK, test in CI environments, and report any issues in our tracker on the feedback page.
    • Test your current app for compatibility: install it onto a device or emulator running Android 16, test extensively, and learn whether your app is affected by changes in Android 16.

We’ll update the preview system images and SDK regularly throughout the Android 16 release cycle. This preview release is for developers only and not intended for daily consumer use. We're making it available by manual download. Once you’ve manually installed a preview build, you’ll automatically get future updates over-the-air for all later previews and Betas.

If you've already installed Android 15 QPR Beta 2 and would like to flash Android 16 Developer Preview 2, you can do so without first having to wipe your device.

As we reach our Beta releases, we'll be inviting consumers to try Android 16 as well, and we'll open up enrollment for Android 16 in the Android Beta program at that time.

For complete information, visit the Android 16 developer site.