
Spotlight Week: Design and Develop Widgets

Posted by Ash Nohe and Summers Pitman – Developer Relations Engineers

We’re kicking off the next edition in our Spotlight Week series! This week, we'll be diving deep into how to create high-quality widgets that boost user engagement and improve discoverability.

We've heard your feedback: you want your widgets to be easily discoverable. To address this, we're excited to share that Google Play is introducing a new search filter specifically for apps with high-quality widgets. By equipping you with the knowledge and tools to make your widgets shine, we aim to show how widgets can be a crucial element in building delightful, helpful, and performant experiences that keep your users engaged. Learn more about Google Play's widget discovery features.



Here’s what we’re covering this week in our Spotlight Week on Widgets:

Why Widgets?

Monday, March 3rd

We're kicking off the week with an overview of why widgets are essential for today's users. Learn how you can level up your app with widgets and get inspired by these best-in-class examples. Plus, learn how Google Play is improving widget discovery through a dedicated search filter, new app detail page badges, and other enhancements designed to increase user interaction.

Design great widgets with Figma and Canonical Layouts

Tuesday, March 4th

Learn how to visualize your content in widget layouts and create high-quality widgets with a new Figma resource, a hands-on lab, and a blog post on the Canonical Layouts. Then learn from SoundCloud's experience: a case study showcasing an impactful widget implementation.

Develop best practice widgets with Glance

Wednesday, March 5th

Follow our code-along video to learn practical widget update techniques using Canonical Layouts.

#AskAndroid

Thursday, March 6th

Get your widget questions answered in #AskAndroid, and dive into lockscreen widgets in our FAQ.


That's a week packed with widget insights! This blog post serves as your central hub for updates, with links added regularly throughout the week. Get even more widget content and insights by following Android Developers on X and Android by Google on LinkedIn.



Level Up Your App: Why Android Widgets are a Game-Changer

Posted by André Labonté – Senior Product Manager, Android Widgets

If you're an Android app developer looking to boost your app's visibility and engagement, you should definitely consider adding widgets. These small but mighty UI elements can have a significant impact on your app's success.

A widget is basically a piece of UI that lives outside your main app. Widgets act as a window into your app's content and a shortcut to your core features, which users can conveniently engage with right from their home screen, lock screen, or even through digital assistants.

Why Widgets are Awesome for Your App:

    • More Visibility: Widgets put your brand and key features front and center on the user's device, so users are more likely to see them.
    • Better User Engagement: By giving users quick access to important features, widgets encourage them to use your app more often.
    • Increased Conversions: You can use widgets to recommend personalized content or promote premium features, which could lead to more conversions.
    • Happier Users Who Stick Around: Easy access to app content and features through widgets can lead to overall better user experience, and contribute to retention.

Understanding What Users Want: Key to Good Widget Design

People use widgets for different reasons. Understanding these motivations is crucial for designing widgets that resonate.

    • Customization: Users like to personalize their home screens. Think about how your app's content can help them do that.
    • Efficiency: Widgets give users quick access to the features they use a lot, which saves them time and effort. If your app has features that users would find handy to access right from their home screen, think about putting them in a widget.
    • Quick Info: Some widgets are great for giving users essential info at a glance. If users often open your app for quick updates, consider offering a glanceable widget.

Building Awesome Widgets: Tips for Developers

Here's how to make widgets that users will love:

    • Focus on Value: Make sure your widget does something useful for users without them having to open the app.
    • Keep it Simple: Design widgets that are easy to use and understand.
    • Make it Adaptable: Test your widgets on different Android devices (phones, tablets, foldables) to make sure they work well on all of them.
    • Match the Look: Design widgets that fit in with the system's overall look by using system colors, fonts, and corner shapes.
    • Make it Easy to Find: Use the widget pinning API to encourage users to add your widget from within your app. Give them good previews and descriptions so they know what it's all about.

Get Inspired and Start Building

We encourage you to integrate widgets into your Android app strategy. For inspiration and guidance, explore our new Widget design gallery, featuring Canonical Widget Layouts.

We can't wait to see the awesome widgets you come up with!


This blog post is part of our series: Spotlight Week on Widgets, where we provide resources—blog posts, videos, sample code, and more—all designed to help you design and create widgets. You can read more in the overview of Spotlight Week: Widgets, which will be updated throughout the week.

Google Play enhances widget discovery to drive engagement with your app

Posted by Yinka Taiwo-Peters – Product Manager

Android developers, we've heard you. Historically, one of the challenges with investing in widget development has been discoverability and user understanding. You've asked for better ways for users to find and utilize your widgets, and we're delivering. Google Play now offers significant enhancements to widget discovery, creating a prime opportunity to re-engage with your users on a deeper level.

We understand that the effort required to build and maintain widgets needs to be justified by user adoption. That's why we've designed these key improvements, which are coming soon to Google Play on Android phones, tablets, and foldables:

    • Dedicated Widgets Search Filter: Users can now directly search for apps with widgets using a dedicated filter on Google Play. This means your apps/games with widgets will be easily identified, helping drive targeted downloads and engagement.
    • New Widget Badges on App Detail Pages: We’ve introduced a visual badge on your app’s detail pages to clearly indicate the presence of widgets. This eliminates guesswork for users and highlights your widget offerings, encouraging them to explore and utilize this capability.
    • Curated Widgets Editorial Page: We're actively educating users on the value of widgets through a new editorial page. This curated space showcases collections of excellent widgets and promotes the apps that leverage them. This provides an additional channel for your widgets to gain visibility and reach a wider audience.
[Image: three side-by-side screenshots showing the widget filter in the Google Play Store]

What this means for you:

    • Increased User Engagement: Enhanced discoverability may translate to more users finding and using your widgets, leading to increased app engagement and user retention.
    • New Opportunities for User Interaction: Widgets offer a unique way to provide value and interact with users on their home screens, fostering a deeper connection with your app.
    • Renewed Investment Justification: The improved discoverability features make widget development a more viable and rewarding investment.

We encourage you to revisit your app strategy and consider the potential of widgets. With these new discovery tools, Google Play is making it easier than ever for users to find and love your widgets. Now is the time to leverage the power of widgets and enhance your Android app experience.


This blog post is part of our series: Spotlight Week on Widgets, where we provide resources—blog posts, videos, sample code, and more—all designed to help you design and create widgets. You can read more in the overview of Spotlight Week: Widgets, which will be updated throughout the week.

Meet the Android Studio Team: A Conversation with Android Developer UX Manager, Dan Dole

Posted by Ashley Tschudin – Social Media Specialist, MTP at Google

Welcome to "Meet the Android Studio Team"! In this blog series, we introduce you to the passionate people who create the Android development tools you use every day. Get to know the engineers, designers, product managers, and more who work hard to craft the best possible experience for Android developers, and explore their unique perspectives.


Dan Dole: Building Android Studio for You

Meet Dan Dole, a UX Manager for Android Developer UX, who offers a unique perspective on the Android development journey. He highlights the passion and talent within the Android Developer team, emphasizing the importance of elegant solutions and efficient experiences for developers.

Dan also delves into the exciting potential of AI and machine learning to transform Android development, foreseeing a future where AI accelerates learning, refines code, and empowers developers to focus on innovation.

Through his insights, Dan underscores the collaborative spirit and unwavering commitment to developer success that defines the Android Developer Experience.

Can you tell us about your journey to becoming a part of the Android Studio team? What sparked your interest in Android development?

My journey with Android Development and the Android Studio team started with a conversation with a former colleague and the product lead for Android Developer. She was a leader I respected as someone who was passionate about developers, and believed that UX was a critical component of product development. After meeting with her and understanding the direction of Android, I was convinced that Android could be not just an outstanding mobile platform but a platform that spanned devices, and this was an organization that was focused on enabling developers to bring their talents and creativity to billions of users. Each year, I see us advancing in that direction and feel more confident in my choice to be part of the Android Developer team.

This question can’t be answered without mentioning that the people working on Android Developer tools and APIs are some of the most passionate and talented people I have ever worked with.

What are some of the biggest challenges you've faced in your career as a developer, and how have those experiences shaped your approach to your job?

I am a UX professional in a highly technical environment. This has been the case for about two decades. One of the challenges I have faced is articulating the value of elegant solutions for developers.

This is partially because developers are very capable and resourceful. Clearly, they are tolerant and they will overcome issues that average users won’t. Prior to joining Android Developer Experience, I would have to create processes and negotiate quality bars to drive quality and build efficient experiences.

This challenge gave me skills in release management and an understanding of some complexities unique to this space, but it also gave me tools to help explain that developers may be able to manage complexity better than most. Developers appreciate refinement, productivity, and quality as much as they appreciate flexibility and capability.

How has the integration of AI and machine learning impacted Android developer capabilities, and how do you see it evolving in the future?

We are in the very early stages of AI and its ability to impact developers. As we learn how to be transparent and give developers control over how an AI can benefit them, we are seeing an immediate impact on accelerating learning and refining code.

I expect AI to remove the “chores” that developers have to do, creating more space for them to be productive. I also expect AI to evolve from generating artifacts to generating actions, making AI features more proactive and allowing developers to adjust more quickly to users' needs.

How does the Android Studio team ensure that products or features meet the ever-changing needs of developers?

I lead our Android Developer research and design team. We spend countless hours listening to developers, evaluating feedback, and understanding technology investments. We approach these conversations and research instruments by evaluating what we have already delivered, looking at and listening to the challenges developers face, and designing and evaluating new approaches.

The Android Developer team (Eng, Product, UX, and Test) is motivated by supporting developers, so all developer feedback is received with gratitude and influences all our investments.

What advice would you give to aspiring Android developers who are just starting their journey?

Android is a vibrant and welcoming community, so my advice would be to engage with the community. It is where we learn, inspire, and grow together. I have heard many Android developers talk about the pride they have working on this platform and the conviction they have in it being the best platform to work on. I feel like this is unique to Android: the platform isn't a means to an end, it's an identity and value system. Android is a community of amazing people, so get involved.

Make Gemini in Android Studio Your Coding Companion

Embrace Dan's vision for the future of Android development and explore the latest AI advancements in Android Studio. Features like AI-powered code generation and refactoring tools empower you to develop higher-quality apps with greater efficiency.

Stay tuned!

Want to meet more of the Android Studio team? Stay tuned for future installments of this series, where we'll introduce you to new faces and share their unique insights.

Find Dan Dole on LinkedIn.

Meet the Android Studio Team: A Conversation with Engineering Director, Tor Norbye

Posted by Ashley Tschudin – Social Media Specialist, MTP at Google

Welcome to "Meet the Android Studio Team," our new ongoing blog series. Each week, we'll introduce you to the talented people behind Android Studio. Get to know the engineers, designers, product managers, and more who create the best possible experience for Android developers like you. Join us and explore their unique perspectives.


Tor Norbye: Building Android Studio for You

Tor Norbye, Engineering Director

Meet Tor Norbye, an Engineering Director at Google leading the development of Android Studio.

From his early days of coding to leading the charge on AI-powered development tools, Tor shares his insights on the evolution of Android and the vital role Android Studio plays in its future.

We'll delve into the challenges of creating developer tools, the importance of community feedback, and how Google strives to empower developers worldwide.


Can you tell us about your journey to becoming a part of the Android Studio team? What sparked your interest in Android development?

I grew up in Norway and I was fascinated by programming; my first exposure was as a middle schooler reading program listings in magazines (yes, in the early 80s, monthly computer magazines would include source code!) and in 1983 I got my hands on a microcomputer, and knew immediately that's what I wanted to do as a career. And now, 40+ years later, I still love programming. It's not my day-job anymore, but I still write bits and pieces of code for Android Studio on the shuttle and during quiet periods.

I've worked on developer tools my whole career - first, 14 years at Sun Microsystems after college. In 2010 I got increasingly interested in the rise of mobile computing and really wanted to be part of it, so I joined the Android team, and I've been here since.

Back then there was no "Android Studio". At the time we were working on Eclipse-based tooling for Android development. But we all knew that IntelliJ was the gold-standard for Java development, so a couple years later we began the work on building Android Studio on top of IntelliJ and with various new and ported code from our Eclipse plugins. I then had the honor of doing the unveiling demo at Google I/O in 2013.

How has the integration of AI and machine learning impacted Android developer capabilities, and how do you see it evolving in the future?

The integration of artificial intelligence has absolutely impacted Android developer capabilities, and this is just the beginning.

I felt very fortunate to be part of bringing about the massive shift from desktop computing to mobile computing when I joined Android, and I can't believe I get to be in the middle of a second massive industry shift as well, with AI and large language models.

I actually spend a lot of my time on this, working with Studio engineers, UX and product managers on our various AI related features, and talking to partner AI teams at Google. We've made a huge amount of progress in the last couple of years, both on the Studio feature integration side, as well as Google-wide on the AI side. While there is some skepticism that we're just doing AI features for AI's sake, I don't see it that way. With AI, we can suddenly, with relatively low effort, build useful features not previously possible.

Here's a very simple example from the latest Studio version: When you invoke the Rename refactoring feature, we use Gemini to add additional naming suggestions into the name popup based on what your code is doing. Here we're helping you pick good names – and naming is famously one of the two hardest problems in computer science – naming, cache invalidation and off-by-one errors. Yet LLMs are good at this – so coupled with the safe refactoring machinery in the IDE, we were able to safely add a useful feature with relatively low engineering cost on the IDE side (of course, this is building on top of a massive investment from Google over on the Gemini side).

The field is moving incredibly quickly, so it's hard to predict where things are going, but we're actively working in several areas: making the AI more aware of your codebase, making it handle larger, more complex tasks via AI agents, and so much more.

What are some of the biggest challenges you've faced in your career as a developer, and how have those experiences shaped your approach to your job?

Earlier in my career, at a different company, we had big annual releases. I took a lot of pride in my productivity, and as my responsibilities grew, I'd try to do the impossible and deliver, no matter what. I'd not only work long hours, but I'd also try to work as quickly as I can. This led to a lot of stress. I remember putting my (at the time) young children to bed and impatiently waiting for them to fall asleep such that I could head back out to the garage office and start the evening coding shift. And I knew that stress isn't healthy, so I'd also stress about being stressed! This obviously wasn't sustainable.

Now, I emphasize work life balance not only for myself, but also for our team. I want to make sure our work is sustainable, and that people can thrive and be in it for the long term. It's a marathon, not a sprint.

Can you share an example of how feedback from the developer community has directly influenced a feature or improvement?

We have a number of feedback channels; the most important one is the Android Studio issue tracker.

We still have a very large backlog of bugs, so it's easy to get the impression that we're ignoring user reports, but that's not true. As a team, we've actually fixed several thousand bugs in 2024 alone. The best bugs are those that are clear and actionable, ideally with steps to reproduce.

I'm also very thankful to everyone who turns on data sharing in Studio; if you don't already, please consider it! Our analytics is more of an indirect, but still vital, feedback channel from the community. In addition to collecting information on, for example, which menu items are clicked, we also use it to collect quality metrics on system health. For instance, when we detect that the UI is lagging (such as a 1+ second freeze in the UI thread), we grab a thread dump and send it to the server, then aggregate these into a dashboard where we can see top freeze spots in the IDE across the user population, and can focus our efforts on fixing those.

How does the Studio team contribute to Google's broader vision for the Android platform?

In Android Studio we're always making sure we support the latest technologies and recommendations from Android, Firebase, Material, and other Google technologies. That way, it's easier for developers to adopt recommendations, like using Kotlin, Coroutines, Compose, Material, and so on.

Explore the Power of AI

Unlock the full potential of AI in your Android development journey. Explore the latest advancements in Android Studio, including intelligent code completion, automated refactoring, and other AI-driven tools.

Stay tuned!

Don't miss our next and final installment in the "Meet the Android Studio Team" series; we'll feature one more talented team member and share their unique perspective. Stay tuned to learn more about the amazing people behind Android Studio.

Find Tor Norbye on Bluesky.

The Second Beta of Android 16

Posted by Matthew McCullough – VP of Product Management, Android Developer

Today we're releasing the second beta of Android 16, continuing our work to build a platform that enables creative expression. You can enroll any supported Pixel device to get this and future Android Beta updates over-the-air.

This build adds new support for professional camera experiences and graphical effects, extends our performance framework, and continues the evolution of features related to privacy, security, and background tasks. We're looking forward to hearing what you think, and thank you in advance for your continued help in making Android a platform that works for everyone.

Media and camera updates

Android 16 enhances support for professional camera users, allowing for hybrid auto exposure along with precise color temperature and tint adjustments. It's easier than ever to capture motion photos with new Intent actions, and we're continuing to improve UltraHDR images, with support for HEIC encoding and new parameters from the ISO 21496-1 draft standard.

Hybrid auto-exposure

Android 16 adds new hybrid auto-exposure modes to Camera2, allowing you to manually control specific aspects of exposure while letting the auto-exposure (AE) algorithm handle the rest. You can control ISO + AE, and exposure time + AE, providing greater flexibility compared to the current approach where you either have full manual control or rely entirely on auto-exposure.

fun setISOPriority() {
    // ...

    val availablePriorityModes = mStaticInfo.characteristics.get(
        CameraCharacteristics.CONTROL_AE_AVAILABLE_PRIORITY_MODES
    )
    // ...

    // Turn on AE mode to set priority mode
    reqBuilder[CaptureRequest.CONTROL_AE_MODE] = CameraMetadata.CONTROL_AE_MODE_ON
    reqBuilder[CaptureRequest.CONTROL_AE_PRIORITY_MODE] = CameraMetadata.CONTROL_AE_PRIORITY_MODE_SENSOR_SENSITIVITY_PRIORITY
    reqBuilder[CaptureRequest.SENSOR_SENSITIVITY] = TEST_SENSITIVITY_VALUE
    val request: CaptureRequest = reqBuilder.build()

    // ...
}

Precise color temperature and tint adjustments

Android 16 adds camera support for fine color temperature and tint adjustments to better support professional video recording applications. White balance settings are currently controlled through CONTROL_AWB_MODE, which is limited to a preset list of options such as Incandescent, Cloudy, and Twilight. The new COLOR_CORRECTION_MODE_CCT enables the use of COLOR_CORRECTION_COLOR_TEMPERATURE and COLOR_CORRECTION_COLOR_TINT for precise white balance adjustments based on the correlated color temperature.

fun setCCT() {
    // ... (Your existing code before this point) ...

    val colorTemperatureRange: Range<Int> =
        mStaticInfo.characteristics[CameraCharacteristics.COLOR_CORRECTION_COLOR_TEMPERATURE_RANGE]

    // Set to manual mode to enable CCT mode
    reqBuilder[CaptureRequest.CONTROL_AWB_MODE] = CameraMetadata.CONTROL_AWB_MODE_OFF
    reqBuilder[CaptureRequest.COLOR_CORRECTION_MODE] = CameraMetadata.COLOR_CORRECTION_MODE_CCT
    reqBuilder[CaptureRequest.COLOR_CORRECTION_COLOR_TEMPERATURE] = 5000
    reqBuilder[CaptureRequest.COLOR_CORRECTION_COLOR_TINT] = 30

    val request: CaptureRequest = reqBuilder.build()

    // ... (Your existing code after this point) ...
}
[Image: five photos of the back of a Google Pixel phone demonstrating different color temperatures and tints; the original is in the top left, followed by Tint -50, Tint +50, Temp 3000, and Temp 7000]

Motion photo capture intent actions

Android 16 adds standard Intent actions — ACTION_MOTION_PHOTO_CAPTURE and ACTION_MOTION_PHOTO_CAPTURE_SECURE — which request that the camera application capture a motion photo and return it.


You must either pass an extra EXTRA_OUTPUT to control where the image will be written, or a Uri through Intent setClipData. If you don't set a ClipData, the EXTRA_OUTPUT Uri will be copied into it for you when calling Context.startActivity.
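
As a sketch, launching a motion photo capture could look like the following. This assumes the new action is surfaced as a MediaStore constant alongside the existing ACTION_IMAGE_CAPTURE (the constant's exact location isn't stated here), and that outputUri is a writable content Uri your app provides:

import android.app.Activity
import android.content.Intent
import android.net.Uri
import android.provider.MediaStore
import androidx.activity.result.contract.ActivityResultContracts
import androidx.appcompat.app.AppCompatActivity

class CaptureActivity : AppCompatActivity() {
    // Receives the result once the camera app returns.
    private val motionPhotoLauncher = registerForActivityResult(
        ActivityResultContracts.StartActivityForResult()
    ) { result ->
        if (result.resultCode == Activity.RESULT_OK) {
            // The motion photo was written to the Uri passed via EXTRA_OUTPUT.
        }
    }

    fun captureMotionPhoto(outputUri: Uri) {
        // ACTION_MOTION_PHOTO_CAPTURE is assumed to live on MediaStore,
        // like ACTION_IMAGE_CAPTURE; verify against the final API surface.
        val intent = Intent(MediaStore.ACTION_MOTION_PHOTO_CAPTURE).apply {
            putExtra(MediaStore.EXTRA_OUTPUT, outputUri)
        }
        motionPhotoLauncher.launch(intent)
    }
}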

UltraHDR image enhancements

[Image: a split-screen comparison of Standard Dynamic Range (SDR) and High Dynamic Range (HDR) renditions of the same detailed landscape; the HDR side is more vivid and vibrant]

Android 16 continues our work to deliver dazzling image quality with UltraHDR images. It adds support for UltraHDR images in the HEIC file format. These images will get ImageFormat type HEIC_ULTRAHDR and will contain an embedded gainmap similar to the existing UltraHDR JPEG format. We're working on AVIF support for UltraHDR as well, so stay tuned.

In addition, Android 16 implements additional parameters in UltraHDR from the ISO 21496-1 draft standard, including the ability to get and set the colorspace that gainmap math should be applied in, as well as support for HDR encoded base images with SDR gainmaps.

Custom graphical effects with AGSL

Android 16 adds RuntimeColorFilter and RuntimeXfermode, allowing you to author complex effects like Threshold, Sepia, and Hue Saturation and apply them to draw calls. Since Android 13, you've been able to use AGSL to create custom RuntimeShaders that extend Shaders. The new API mirrors this, adding an AGSL-powered RuntimeColorFilter that extends ColorFilters, and a Xfermode effect that allows you to implement AGSL-based custom compositing and blending between source and destination pixels.

private val thresholdEffectString = """
    uniform half threshold;
    half4 main(half4 c) {
        half luminosity = dot(c.rgb, half3(0.2126, 0.7152, 0.0722));
        half bw = step(threshold, luminosity);
        return bw.xxx1 * c.a;
    }"""

fun setCustomColorFilter(paint: Paint) {
    val filter = RuntimeColorFilter(thresholdEffectString)
    // Bind the "threshold" uniform declared in the AGSL shader above.
    filter.setFloatUniform("threshold", 0.5f)
    paint.colorFilter = filter
}

Behavior changes

With every Android release, we seek to make the platform more efficient, privacy conscious, internationalization friendly, and robust, balancing the needs of apps against hardware support, system performance, user privacy, and battery life. This can result in behavior changes that impact compatibility.

Edge to edge opt-out going away

Android 15 enforced edge-to-edge for apps targeting Android 15 (SDK 35), but your app could opt-out by setting R.attr#windowOptOutEdgeToEdgeEnforcement to true. Once your app targets Android 16 (Baklava), R.attr#windowOptOutEdgeToEdgeEnforcement is deprecated and disabled and your app cannot opt-out of going edge-to-edge. To be compatible with Android 16 Beta 2, ensure your app supports edge-to-edge and remove any use of R.attr#windowOptOutEdgeToEdgeEnforcement. To support edge-to-edge, see the Compose and Views guidance. Please let us know about concerns in our tracker on the feedback page.
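
For a Views-based Activity, one minimal way to adopt edge-to-edge is the androidx.activity helper shown below; this is a sketch of a single option, and the linked guidance covers inset handling in more depth:

import android.os.Bundle
import androidx.activity.ComponentActivity
import androidx.activity.enableEdgeToEdge

class MainActivity : ComponentActivity() {
    override fun onCreate(savedInstanceState: Bundle?) {
        // Lay out behind the system bars instead of relying on the removed opt-out.
        enableEdgeToEdge()
        super.onCreate(savedInstanceState)
        // Remember to apply WindowInsets so controls aren't obscured by the bars.
    }
}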

Health and fitness permissions

For apps targeting Android 16 or higher, BODY_SENSORS permissions are transitioning to the granular permissions under android.permissions.health also used by Health Connect. Any API previously requiring BODY_SENSORS or BODY_SENSORS_BACKGROUND will now require the corresponding android.permissions.health permission. This affects the following data types, APIs, and foreground service types:

If your app uses these APIs, it should now request the respective granular permissions:

These permissions are the same as those that guard access to reading data from Health Connect, the Android datastore for health, fitness, and wellness data.

Abandoned empty jobs stop reason

An abandoned job occurs when the JobParameters object associated with the job has been garbage collected, but jobFinished has not been called to signal job completion. This indicates that the job may be running and being rescheduled without the application's awareness.

Applications in Android 16 that rely on JobScheduler without maintaining a strong reference to the JobParameters object will now be granted the new job stop reason STOP_REASON_TIMEOUT_ABANDONED on timeout, instead of STOP_REASON_TIMEOUT.

If there are frequent occurrences of the new abandoned stop reason, the system will take mitigation steps to reduce job frequency. Please use the new stop reason to detect and reduce abandoned jobs.

Note: If you're using WorkManager, you're not expected to be impacted by this change — one nice side effect of using Android Jetpack to schedule your work.
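
If you schedule work with JobScheduler directly, a minimal sketch for spotting the new stop reason in a JobService might look like this (assuming the constant surfaces through JobParameters.getStopReason() like the existing stop reasons):

import android.app.job.JobParameters
import android.app.job.JobService
import android.util.Log

class SyncJobService : JobService() {
    override fun onStartJob(params: JobParameters): Boolean {
        // Keep a strong reference to params until jobFinished() is called.
        doWorkAsync(params)
        return true // work continues on a background thread
    }

    override fun onStopJob(params: JobParameters): Boolean {
        if (params.stopReason == JobParameters.STOP_REASON_TIMEOUT_ABANDONED) {
            // The system believes this job was abandoned; audit every code path
            // to make sure jobFinished() is always reached.
            Log.w("SyncJobService", "Job stopped as abandoned")
        }
        return true // ask the system to reschedule
    }

    private fun doWorkAsync(params: JobParameters) { /* ... */ }
}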

Intent redirect changes

Android 16 introduces default security hardening against Intent redirection attacks regardless of your app's targetSDK version. The removeLaunchSecurityProtection API allows you to opt-out of this protection if your testing reveals issues.

Note: Opting out of security protections should be done with caution and only when absolutely necessary, as it can increase the risk of security vulnerabilities.

val iSublevel = intent.getParcelableExtra("sub_intent", Intent::class.java)
iSublevel?.let {
    it.removeLaunchSecurityProtection()
    startActivity(it)
}

Elegant font APIs deprecated and disabled

Apps targeting Android 15 (API level 35) have the elegantTextHeight TextView attribute set to true by default, replacing the compact font with one that is much more readable. You could override this by setting the elegantTextHeight attribute to false.

Android 16 deprecates the elegantTextHeight attribute, and the attribute will be ignored once your app targets Android 16. The “UI fonts” controlled by these APIs are being discontinued, so you should adapt any layouts to ensure consistent and future-proof text rendering in Arabic, Lao, Myanmar, Tamil, Gujarati, Kannada, Malayalam, Odia, Telugu, or Thai.

[Image: default elegantTextHeight behavior for apps targeting Android 14 (API level 34) and lower]

[Image: default elegantTextHeight behavior for apps targeting Android 15 (API level 35) and higher]

16 KB page size compatibility mode

Android 15 introduced support for 16KB memory pages to optimize performance of the platform. Android 16 adds a compatibility mode, allowing some apps built for 4KB memory pages to run on a device configured for 16KB memory pages.

If Android detects that your app has 4KB-aligned memory pages, it will automatically use compatibility mode and display a notification dialog to the user. Setting the android:pageSizeCompat property in AndroidManifest.xml to enable the backwards compatibility mode will prevent the dialog from being displayed when your app launches. For best performance, reliability, and stability, your app should still be 16KB aligned. Read our recent blog post about updating your apps to support 16KB memory pages for more details.

[Image: screenshot of PageSizeCompatTestApp in Android 16]

Measurement system customization

Users can now customize their measurement system in regional preferences within Settings. The user preference is included as part of the locale code, so you can register a BroadcastReceiver on ACTION_LOCALE_CHANGED to handle locale configuration changes when regional preferences change.

Using formatters can help match the local experience. For example, “0.5 in” in English (United States) is “12,7 mm” for a user who has set their phone to English (Denmark) or who uses their phone in English (United States) with the metric system as the measurement system preference.
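
A minimal sketch of reacting to the change (registering the receiver in code here; a manifest-declared receiver also works):

import android.content.BroadcastReceiver
import android.content.Context
import android.content.Intent
import android.content.IntentFilter

val localeChangeReceiver = object : BroadcastReceiver() {
    override fun onReceive(context: Context, intent: Intent) {
        if (intent.action == Intent.ACTION_LOCALE_CHANGED) {
            // The measurement-system preference travels with the locale, so
            // rebuild any cached measurement formatters here.
        }
    }
}

fun registerLocaleReceiver(context: Context) {
    context.registerReceiver(localeChangeReceiver, IntentFilter(Intent.ACTION_LOCALE_CHANGED))
}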

To find these settings in Android 16 Beta 2, open the Settings app and navigate to System > Languages & region.

Content handling for live wallpapers

In Android 16, the live wallpaper framework is gaining a new content API to address the challenges of dynamic, user-driven wallpapers. Currently, live wallpapers incorporating user-provided content require complex, service-specific implementations. Android 16 introduces WallpaperDescription and WallpaperInstance. WallpaperDescription allows you to identify distinct instances of a live wallpaper from the same service. For example, a wallpaper that has instances on both the home screen and on the lock screen may have unique content in both places. The wallpaper picker and WallpaperManager use this metadata to better present wallpapers to users, streamlining the process for you to create diverse and personalized live wallpaper experiences.

Headroom APIs in ADPF

The SystemHealthManager introduces the getCpuHeadroom and getGpuHeadroom APIs, designed to provide games and resource-intensive apps with estimates of available CPU and GPU resources. These methods offer a way for you to gauge how your app or game can best improve system health, particularly when used in conjunction with other Android Dynamic Performance Framework (ADPF) APIs that detect thermal throttling. By using CpuHeadroomParams and GpuHeadroomParams on supported devices, you will be able to customize the time window used to compute the headroom and select between average or minimum resource availability. This can help you reduce your CPU or GPU resource usage accordingly, leading to better user experiences and improved battery life.
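
As a rough sketch, a query might look like the following; the method names come from the text above, but the exact signatures (including whether null selects default parameters) are assumptions to verify against the final API reference:

import android.os.health.SystemHealthManager

fun throttleIfNeeded(health: SystemHealthManager) {
    // Null params are assumed to request the default time window and
    // calculation type; CpuHeadroomParams customizes this on supported devices.
    val cpuHeadroom = health.getCpuHeadroom(null)
    // Scale back worker threads, resolution, or effects when headroom runs low.
}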

Key sharing API

Android 16 adds APIs that support sharing access to Android Keystore keys with other apps. The new KeyStoreManager class supports granting and revoking access to keys by app uid, and includes an API for apps to access shared keys.

Standardized picture and audio quality framework for TVs

The new MediaQuality package in Android 16 exposes a set of standardized APIs for access to audio and picture profiles and hardware-related settings. This allows streaming apps to query profiles and apply them to media dynamically:

    • Movies mastered with a wider dynamic range require greater color accuracy to see subtle details in shadows and adjust to ambient light, so a profile that prefers color accuracy over brightness may be appropriate.
    • Live sporting events are often mastered with a narrow dynamic range, but are often watched in daylight, so a profile that gives preference to brightness over color accuracy can give better results.
    • Fully interactive content needs minimal processing to reduce latency and benefits from higher frame rates, which is why many TVs ship with a game profile.

The API allows apps to switch between profiles and users to enjoy the benefits of tuning supported TVs to best suit their content.

Accessibility

Android 16 adds additional APIs to enhance UI semantics, helping improve consistency for users who rely on accessibility services such as TalkBack.

Duration added to TtsSpan

Android 16 extends TtsSpan with a TYPE_DURATION, consisting of ARG_HOURS, ARG_MINUTES, and ARG_SECONDS. This allows you to directly annotate time duration, ensuring accurate and consistent text-to-speech output with services like TalkBack.
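
For example, here is a sketch that marks a "1:30" label as a duration of one hour and thirty minutes, using the long-standing public TtsSpan constructor with the new type and argument constants named above:

import android.os.PersistableBundle
import android.text.Spannable
import android.text.SpannableString
import android.text.Spanned
import android.text.style.TtsSpan

fun durationLabel(): Spannable {
    val args = PersistableBundle().apply {
        putInt(TtsSpan.ARG_HOURS, 1)
        putInt(TtsSpan.ARG_MINUTES, 30)
    }
    val text = SpannableString("1:30")
    // TalkBack can now announce this as a duration rather than a clock time.
    text.setSpan(TtsSpan(TtsSpan.TYPE_DURATION, args), 0, text.length,
        Spanned.SPAN_EXCLUSIVE_EXCLUSIVE)
    return text
}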

Support elements with multiple labels

Android currently allows UI elements to derive their accessibility label from another element, and now offers the ability to associate multiple labels with a single element, a common scenario in web content. By introducing a list-based API within AccessibilityNodeInfo, Android can directly support these multi-label relationships. As part of this change, we've deprecated AccessibilityNodeInfo setLabeledBy and getLabeledBy in favor of addLabeledBy, removeLabeledBy, and getLabeledByList.
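
As a sketch, a custom view could attach two labels as follows, assuming addLabeledBy accepts a labeling View like the setLabeledBy method it replaces (titleView and unitView are hypothetical label views):

import android.view.View
import android.view.accessibility.AccessibilityNodeInfo

// titleView and unitView are hypothetical views that both label this field.
fun attachLabels(info: AccessibilityNodeInfo, titleView: View, unitView: View) {
    info.addLabeledBy(titleView)
    info.addLabeledBy(unitView)
    // getLabeledByList() should now report both labeling nodes.
}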

Improved support for expandable elements

Android 16 adds accessibility APIs that allow you to convey the expanded or collapsed state of interactive elements, such as menus and expandable lists. By setting the expanded state using setExpandedState and dispatching TYPE_WINDOW_CONTENT_CHANGED AccessibilityEvents with a CONTENT_CHANGE_TYPE_EXPANDED content change type, you can ensure that screen readers like TalkBack announce state changes, providing a more intuitive and inclusive user experience.

Indeterminate ProgressBars

Android 16 adds RANGE_TYPE_INDETERMINATE, giving a way for you to expose RangeInfo for both determinate and indeterminate ProgressBar widgets, allowing services like TalkBack to more consistently provide feedback for progress indicators.

Tri-state CheckBox

The new AccessibilityNodeInfo getChecked and setChecked(int) methods in Android 16 now support a "partially checked" state in addition to "checked" and "unchecked." This replaces the deprecated boolean isChecked and setChecked(boolean).
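
A sketch of reporting the new state from a custom tri-state control; the methods are named in the text, while the CHECKED_STATE_PARTIAL constant name is an assumption:

import android.view.accessibility.AccessibilityNodeInfo

fun reportPartiallyChecked(info: AccessibilityNodeInfo) {
    // Replaces the deprecated boolean setChecked(true/false) with a
    // three-valued state: checked, unchecked, or partially checked.
    info.setChecked(AccessibilityNodeInfo.CHECKED_STATE_PARTIAL)
}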

Two Android API releases in 2025

This preview is for the next major release of Android with a planned launch in Q2 of 2025 and we plan to have another release with new developer APIs in Q4. The Q2 major release will be the only release in 2025 to include behavior changes that could affect apps. The Q4 minor release will pick up feature updates, optimizations, and bug fixes; like our non-SDK quarterly releases, it will not include any intentional app-impacting behavior changes.

[Image: 2025 SDK release timeline showing feature-only updates in Q1 and Q3, a major SDK release with behavior changes, APIs, and features in Q2, and a minor SDK release with APIs and features in Q4]

We'll continue to have quarterly Android releases. The Q1 and Q3 updates provide incremental updates to ensure continuous quality. We’re putting additional energy into working with our device partners to bring the Q2 release to as many devices as possible.

There’s no change to the target API level requirements and the associated dates for apps in Google Play; our plans are for one annual requirement each year, tied to the major API level.

How to get ready

In addition to performing compatibility testing on this next major release, make sure that you're compiling your apps against the new SDK, and use the compatibility framework to enable targetSdkVersion-gated behavior changes as they become available for early testing.

App compatibility

[Image: Android 16 production timeline highlighting the Beta Releases and Platform Stability stages, running from December to the final release]

The Android 16 Preview program runs from November 2024 until the final public release in Q2 of 2025. At key development milestones, we'll deliver updates for your development and testing environments. Each update includes SDK tools, system images, emulators, API reference, and API diffs. We'll highlight critical APIs as they are ready to test in the preview program in blogs and on the Android 16 developer website.

We’re targeting March of 2025 for our Platform Stability milestone. At this milestone, we’ll deliver final SDK/NDK APIs and also final internal APIs and app-facing system behaviors. From that time you’ll have several months before the final release to complete your testing. Learn more by checking the release timeline details.

Get started with Android 16

You can enroll any supported Pixel device to get this and future Android Beta updates over-the-air. If you don’t have a Pixel device, you can use the 64-bit system images with the Android Emulator in Android Studio. If you are currently on Android 16 Beta 1 or are already in the Android Beta program, you will be offered an over-the-air update to Beta 2.

We're looking for your feedback so please report issues and submit feature requests on the feedback page. The earlier we get your feedback, the more we can include in our work on the final release.

For the best development experience with Android 16, we recommend that you use the latest preview of Android Studio (Meerkat). Once you’re set up, here are some of the things you should do:

    • Compile against the new SDK, test in CI environments, and report any issues in our tracker on the feedback page.
    • Test your current app for compatibility, learn whether your app is affected by changes in Android 16, and install your app onto a device or emulator running Android 16 and extensively test it.

We’ll update the beta system images and SDK regularly throughout the Android 16 release cycle. Once you’ve installed a beta build, you’ll automatically get future updates over-the-air for all later previews and Betas.

For complete information, visit the Android 16 developer site.

Meet the Android Studio Team: A Conversation with Staff Developer Programs Engineer, Trevor Johns

Posted by Ashley Tschudin – Social Media Specialist, MTP at Google

Android Studio isn't just code and algorithms – it's built by real people with fascinating stories. Our "Meet the Android Studio Team" series gives you a glimpse into the lives and passions of the talented individuals who craft the tools you use every day. Tune in each month to meet new team members and discover their unique journey.


Trevor Johns: Building Android Studio for You

Trevor Johns, Staff Developer Programs Engineer

Meet Trevor Johns, a seasoned Staff Developer Programs Engineer at Google.

Reflecting on his journey, Trevor sheds light on the most impactful advancements in the Android ecosystem and offers a glimpse into his vision for the future where AI plays a pivotal role in streamlining development workflows.

Trevor discusses the Android Studio team's dedication to enhancing developer productivity through AI, highlighting their focus on understanding and addressing developer needs, and reflects on the dynamic journey of Android development while sharing valuable insights.


Can you tell us about your journey to becoming a part of the Android Studio team? What sparked your interest in Android development?

I've been at Google in various roles since 2007, and transferred to the Android team in 2009 shortly after the launch of the HTC G1 — the first publicly available Android phone. Even in those early days it was clear that mobile computing was a unique opportunity to reimagine many of the limitations of desktop computers and how users interact with the digital world.

Among my first projects were helping developers optimize their apps for the MyTouch 3G and Motorola Droid, as well as creating developer resources for Android's 1.6 Donut release.

Over the years, I've worked on various parts of the Android OS including our first tablet devices, Android Wear, helping develop the original Android support libraries (which later became Jetpack), and the migration to Kotlin.

Recently I joined the Android Studio team to help improve developer productivity, using AI to streamline common developer tasks and help developers have more time to focus on creativity.

How does the Android Studio team ensure that products or features meet the ever-changing needs of developers?

Like the rest of Android, we approach development of new features by listening to our developer community. We hold regular listening sessions with publishers, work with our UX research team to conduct case studies, and participate in online discussions to get a sense for where developers face the most friction — and then try to find ways to reduce that friction.

For example, we developed Gemini in Android Studio's integration with Play Vitals and Firebase Crashlytics based on feedback from members of the developer community who commented to let us know where they would find AI most useful across their developer workflow.

Speaking of, if you'd like to provide us with feedback, you can always file a bug or feature request on the Android Studio issue tracker.

How does the Studio team contribute to Google's broader vision for the Android platform?

In addition to listening to the Android community, we also keep an eye on what's being developed across the rest of the Android team and make sure that Android Studio has the right tools to help developers quickly migrate between Android versions and adopt those new platform features.

Beyond that, the Studio team provides leading edge editing tools to make sure that Android remains one of the easiest computing platforms to develop for — unlocking this unique computing platform for millions of developers.

In your opinion, what is the most impactful feature or improvement the Android team has introduced in recent years, and why?

For developers, my answer would have to be the migration to Kotlin. This language has modernized the Android developer experience — letting developers write apps with less code and fewer errors. It's also the foundation for Jetpack Compose, which is the future of Android UI development.

If you could wave a magic wand and add one dream feature to the Android universe, what would it be and why?

I'd love to see Gemini be able to not just autocomplete code for me, but generate scaffolds for new projects. That way I can focus on building features rather than worrying about basic structure when starting a new project.

Develop Android Apps with Kotlin

Follow Trevor's lead and embrace the power of Kotlin for modern Android development. Enhance your skills and write better Android apps faster with Kotlin.

Stay tuned!

Get ready for another inspiring story! The "Meet the Android Studio Team" series continues next week with a new team member in the spotlight. Don't miss their unique insights and journey.

Find Trevor Johns on LinkedIn, X, Bluesky, and Medium.

TrustedTime API: Introducing a reliable approach to time keeping for your apps

Posted by Kanyinsola Fapohunda – Software Engineer, and Geoffrey Boullanger – Technical Lead

Accurate time is crucial for a wide variety of app functionalities, from scheduling and event management to transaction logging and security protocols. However, a user can change the device's time, so a more accurate source of time than the device's local system time may be required. That's why we're introducing the TrustedTime API, which leverages Google's infrastructure to deliver a trustworthy timestamp, independent of the device's potentially manipulated local time settings.

How does TrustedTime work?

The new API leverages Google's secure infrastructure to provide a trusted time source to your app. TrustedTime periodically syncs its clock to Google's servers, which have access to a highly accurate time source, so that you do not need to make a server request every time you want to know the current network time. Additionally, we've integrated a unique model that calculates the device's clock drift. This will inform you when the time may be inaccurate between network synchronizations.

Why is an accurate source of time important?

Many apps rely on the device's clock for various features. However, users can change their device's time settings, either intentionally or unintentionally, thereby changing the time that your app gets. This can lead to problems such as:

    • Data Inconsistency: Apps relying on chronological event ordering are vulnerable to data corruption if users manipulate device time. TrustedTime mitigates this risk by providing a trustworthy time source.
    • Security Gaps: Time-based security measures, like one-time passwords or timed access controls require an unaltered time source to be effective.
    • Unreliable Scheduling: Apps that depend on accurate scheduling, like calendar or reminder apps, can malfunction if the device clock (i.e. Unix timestamp) is incorrect.
    • Inaccurate Time: The device's internal clock can drift due to various factors, such as temperature, doze mode, battery level, etc. This can lead to problems in applications that require more precision. The TrustedTime API also provides the estimated error with the timestamps, so that you can ensure your app's time-sensitive operations are performed correctly.
    • Lack of Consistency Between Devices: Inconsistent time across devices can cause problems in multi-device scenarios, such as gaming or collaborative applications. The TrustedTime API helps ensure that all devices have a consistent view of time, improving the user experience.
    • Unnecessary Power and Data Consumption: TrustedTime is designed to be more efficient than calling an NTP server every time an app needs the current time. It avoids the overhead of repeated network requests by periodically syncing its clock with time servers. This synced time is then used as a reference point, and the TrustedTime API calculates the current time based on the device's internal clock. This approach reduces network usage and improves performance for apps that need frequent time checks.

TrustedTime Use Cases

The TrustedTime API opens up a range of possibilities for enhancing the reliability and security of your apps, with use cases in areas such as:

    • Financial Applications: Ensure the accuracy of transaction timestamps even when the device is offline, preventing fraud and disputes.
    • Gaming: Implement fair play by preventing users from manipulating the game clock to gain an unfair advantage.
    • Limited-Time Offers: Guarantee that promotions and offers expire at the correct time, regardless of the user's device settings.
    • E-commerce: Accurately track order processing and delivery times.
    • Content Licensing: Enforce time-based restrictions on digital content, like rentals or subscriptions.
    • IoT Devices: Synchronize clocks across multiple devices for consistent data logging and control.
    • Productivity apps: Accurately record the time of any changes made to cloud documents while offline.

Getting started with the TrustedTime API

The TrustedTime API is built on top of Google Play services, making integration seamless for most Android developers.

The simplest way to integrate is to initialize the TrustedTimeClient early in your app lifecycle, such as in the onCreate() method of your Application class. The following example uses dependency injection with Hilt to make the time client available to components throughout the app.

[Optional] Set up dependency injection

// TrustedTimeClientAccessor.kt
import com.google.android.gms.tasks.Task
import com.google.android.gms.time.TrustedTimeClient

interface TrustedTimeClientAccessor {
  fun createClient(): Task<TrustedTimeClient>
}

// TrustedTimeModule.kt
import android.content.Context
import com.google.android.gms.tasks.Task
import com.google.android.gms.time.TrustedTime
import com.google.android.gms.time.TrustedTimeClient
import dagger.Module
import dagger.Provides
import dagger.hilt.InstallIn
import dagger.hilt.android.qualifiers.ApplicationContext
import dagger.hilt.components.SingletonComponent

@Module
@InstallIn(SingletonComponent::class)
class TrustedTimeModule {
  @Provides
  fun provideTrustedTimeClientAccessor(
    @ApplicationContext context: Context
  ): TrustedTimeClientAccessor {
    return object : TrustedTimeClientAccessor {
      override fun createClient(): Task<TrustedTimeClient> {
        return TrustedTime.createClient(context)
      }
    }
  }
}

Initialize early in your app's lifecycle

// TrustedTimeDemoApplication.kt
@HiltAndroidApp
class TrustedTimeDemoApplication : Application() {

  @Inject
  lateinit var trustedTimeClientAccessor: TrustedTimeClientAccessor

  var trustedTimeClient: TrustedTimeClient? = null
    private set

  override fun onCreate() {
    super.onCreate()
    trustedTimeClientAccessor.createClient().addOnCompleteListener { task ->
      if (task.isSuccessful) {
        // Stash the client
        trustedTimeClient = task.result
      } else {
        // Handle error, maybe retry later
        val exception = task.exception
      }
    }
    // To use Kotlin Coroutine, you can use the await() method, 
    // see https://developers.google.com/android/guides/tasks#kotlin_coroutine for more info.
  }
}

NOTE: If you don't use dependency injection in your app, you can simply call
`TrustedTime.createClient(context)` instead of using a TrustedTimeClientAccessor.

Use TrustedTimeClient anywhere in your app

// Retrieve the TrustedTimeClient from your application class
val myApp = applicationContext as TrustedTimeDemoApplication

// In this example, System.currentTimeMillis() is used as a fallback if the
// client is null (i.e. client creation task failed) or when there is no time
// signal available. You may not want to do this if using the system clock is
// not suitable for your use case.
val currentTimeMillis =
    myApp.trustedTimeClient?.computeCurrentUnixEpochMillis()
        ?: System.currentTimeMillis()
// trustedTimeClient.computeCurrentInstant() can be used if Instant is
// preferred to long for Unix epoch times and you are able to use the APIs.

Use in short-lived components like Activity

@AndroidEntryPoint
class MainActivity : AppCompatActivity() {
  @Inject
  lateinit var trustedTimeAccessor: TrustedTimeClientAccessor

  private var trustedTimeClient: TrustedTimeClient? = null

  override fun onCreate(savedInstanceState: Bundle?) {
    super.onCreate(savedInstanceState)
    // ...
    trustedTimeAccessor.createClient().addOnCompleteListener { task ->
      if (task.isSuccessful) {
        // Stash the client
        trustedTimeClient = task.result
      } else {
        // Handle error, maybe retry later or use another time source.
        val exception = task.exception
      }
    }
  }

  private fun getCurrentTimeInMillis(): Long? {
    return trustedTimeClient?.computeCurrentUnixEpochMillis()
  }
}

TrustedTime API availability and limitations

The TrustedTime API is available on all devices running Google Play services on Android 5 (Lollipop) and above. You need to add the dependency com.google.android.gms:play-services-time:16.0.1 (or above) to access the new API. No additional permission is required to use this API. However, TrustedTime needs an internet connection after the device starts up to provide timestamps. If the device hasn't connected to the internet since booting, the TrustedTime APIs won't return timestamps.

It’s important to note that the device's internal clock can drift due to factors like temperature, doze mode, and battery level. TrustedTime doesn't prevent this drift, but its APIs provide an error estimate for each timestamp. Use this estimate to determine if the timestamp's accuracy meets your application's requirements. While TrustedTime makes it more difficult for users to manipulate the time accessed by your app, it does not guarantee complete safety. Advanced techniques can still be used to tamper with the device’s time.

Next steps

To learn more about the TrustedTime API, check out the following resources:

Get ready for Google I/O May 20–21

Posted by the Google I/O team

Google I/O is back

Google I/O returns May 20 – 21! Join us online as we share our vision for the future of technology, along with updates across Android, AI, web, cloud, and more.

Tune in to learn how the latest AI models can help you build innovative apps and transform development workflows. We'll also share how we're making Android development even easier, and empowering you to build richer, more engaging web experiences.

Register now and tune in live

Head to the Google I/O website and register to receive updates. The livestreamed keynotes kick off on May 20th at 10 AM PT, and new this year, we’ll be streaming developer product keynotes live from Shoreline across both days!

Stay tuned for details about I/O Connect events this summer, and test your skills at solving the #GoogleIO puzzle to unlock bonus worlds and earn badges.

Timeline update: third-party autofill services support on Chrome on Android

Posted by Eiji Kitamura – Developer Advocate (@agektmr)

In October 2024, we announced that Chrome 131 would allow third-party autofill services on Android (like password managers) to natively autofill forms on websites. Reflecting on feedback from autofill service developers, we've decided to shift the schedule and allow third-party autofill services from Chrome 135 onwards.

Native Chrome support for third-party autofill services on Android means that users will be able to use their preferred password manager or autofill service directly in Chrome, without having to rely on workarounds or extensions. This change is expected to improve the user experience and security for Android users who use third-party autofill services.

Based on developer feedback, we've fixed bugs, and have been working to make the new setting easier to discover. To support those goals, we've added the following capabilities:

    • An ability to query Chrome settings and learn whether the user wishes to use a third-party autofill service.
    • An ability to deep-link to the Chrome settings page where users can enable third-party autofill services.

Read Chrome settings

Any app can read whether Chrome is in the third-party autofill mode that allows it to use Android Autofill. Chrome uses an Android ContentProvider to communicate that information. Declare in your Android manifest which channels you want to read settings from, e.g.:

<uses-permission android:name="android.permission.READ_USER_DICTIONARY"/>
<queries>
 <!-- To Query Chrome Beta: -->
 <package android:name="com.chrome.beta" />

 <!-- To Query Chrome Stable: -->
 <package android:name="com.android.chrome" />
</queries>

Then, use Android's ContentResolver to request that information by building the content URI as in this example code:

final String CHROME_CHANNEL_PACKAGE = "com.android.chrome";  // Chrome Stable.
final String CONTENT_PROVIDER_NAME = ".AutofillThirdPartyModeContentProvider";
final String THIRD_PARTY_MODE_COLUMN = "autofill_third_party_state";
final String THIRD_PARTY_MODE_ACTIONS_URI_PATH = "autofill_third_party_mode";

final Uri uri = new Uri.Builder()
                  .scheme(ContentResolver.SCHEME_CONTENT)
                  .authority(CHROME_CHANNEL_PACKAGE + CONTENT_PROVIDER_NAME)
                  .path(THIRD_PARTY_MODE_ACTIONS_URI_PATH)
                  .build();

final Cursor cursor = getContentResolver().query(
                  uri,
                  /*projection=*/new String[] {THIRD_PARTY_MODE_COLUMN},
                  /*selection=*/ null,
                  /*selectionArgs=*/ null,
                  /*sortOrder=*/ null);

cursor.moveToFirst(); // Retrieve the result;

int index = cursor.getColumnIndex(THIRD_PARTY_MODE_COLUMN);

if (0 == cursor.getInt(index)) {
  // 0 means that the third party mode is turned off. Chrome uses its built-in
  // password manager. This is the default for new users.
} else {
  // 1 means that the third party mode is turned on. Chrome forwards all
  // autofill requests to Android Autofill. Users have to opt in for this.
}

Deep-link to Chrome settings

To deep-link to the Chrome settings page where users can enable third-party autofill services, use an Android Intent. Make sure to configure the action and categories exactly as in this example code:

Intent autofillSettingsIntent = new Intent(Intent.ACTION_APPLICATION_PREFERENCES);
autofillSettingsIntent.addCategory(Intent.CATEGORY_DEFAULT);
autofillSettingsIntent.addCategory(Intent.CATEGORY_APP_BROWSER);
autofillSettingsIntent.addCategory(Intent.CATEGORY_PREFERENCE);

// Invoking the intent with a chooser allows users to select the channel they want to 
// configure. If only one browser reacts to the intent, the chooser is skipped.
Intent chooser = Intent.createChooser(autofillSettingsIntent, "Pick Chrome Channel");
startActivity(chooser);

// If the caller knows which Chrome channel they want to configure, 
// they can instead add a package hint to the intent, e.g.
autofillSettingsIntent.setPackage("com.android.chrome");
startActivity(autofillSettingsIntent);

Updated timeline

To reflect the feedback and to leave time for autofill service developers to make relevant changes, we are shifting the plan. Users must select “Autofill using another service” in Chrome settings to ensure their autofill experience is unaffected. The new setting will become available in Chrome 135. Autofill services should encourage their users to toggle the setting, to ensure they have the best autofill experience possible with their service and Chrome on Android. Chrome plans to stop supporting the compatibility mode in summer 2025.

    • March 5th, 2025: Chrome 135 beta is available
    • April 1st, 2025: Chrome 135 is in stable
    • Summer 2025: Compatibility mode will no longer be available on Chrome