
CameraX update makes dual concurrent camera even easier

Posted by Donovan McMurray – Developer Relations Engineer

CameraX, Android's Jetpack camera library, is getting an exciting update to its Dual Concurrent Camera feature, making it even easier to integrate this feature into your app. This feature allows you to stream from 2 different cameras at the same time. The original version of Dual Concurrent Camera was released in CameraX 1.3.0, and it was already a huge leap in making this feature easier to implement.

Starting with 1.5.0-alpha01, CameraX will now handle the composition of the 2 camera streams as well. This update is additional functionality; it doesn’t remove any prior functionality, nor is it a breaking change to your existing Dual Concurrent Camera code. To tell CameraX to handle the composition, simply use the new SingleCameraConfig constructor, which takes a new CompositionSettings parameter. Since you’ll be creating 2 SingleCameraConfigs, you should be consistent about which constructor you use.

Nothing has changed in the way you check for concurrent camera support from the prior version of this feature. As a reminder, here is what that code looks like.

// Set up primary and secondary camera selectors if supported on device.
var primaryCameraSelector: CameraSelector? = null
var secondaryCameraSelector: CameraSelector? = null

for (cameraInfos in cameraProvider.availableConcurrentCameraInfos) {
    primaryCameraSelector = cameraInfos.firstOrNull {
        it.lensFacing == CameraSelector.LENS_FACING_FRONT
    }?.cameraSelector
    secondaryCameraSelector = cameraInfos.firstOrNull {
        it.lensFacing == CameraSelector.LENS_FACING_BACK
    }?.cameraSelector

    if (primaryCameraSelector == null || secondaryCameraSelector == null) {
        // If either a primary or secondary selector wasn't found, reset both
        // to move on to the next list of CameraInfos.
        primaryCameraSelector = null
        secondaryCameraSelector = null
    } else {
        // If both primary and secondary camera selectors were found, we can
        // conclude the search.
        break
    }
}

if (primaryCameraSelector == null || secondaryCameraSelector == null) {
    // Front and back concurrent camera not available. Handle accordingly.
}

Here’s the updated code snippet showing how to implement picture-in-picture, with the secondary camera stream scaled down to fit into the lower right corner. In this example, CameraX handles the composition of the camera streams.

// If 2 concurrent camera selectors were found, create 2 SingleCameraConfigs
// and compose them in a picture-in-picture layout.
val primary = SingleCameraConfig(
    primaryCameraSelector,
    useCaseGroup,
    CompositionSettings.Builder()
        .setAlpha(1.0f)
        .setOffset(0.0f, 0.0f)
        .setScale(1.0f, 1.0f)
        .build(),
    lifecycleOwner)
val secondary = SingleCameraConfig(
    secondaryCameraSelector,
    useCaseGroup,
    CompositionSettings.Builder()
        .setAlpha(1.0f)
        .setOffset(2 / 3f - 0.1f, -2 / 3f + 0.1f)
        .setScale(1 / 3f, 1 / 3f)
        .build(),
    lifecycleOwner)

// Bind to lifecycle
val concurrentCamera =
    cameraProvider.bindToLifecycle(listOf(primary, secondary))

You are not constrained to a picture-in-picture layout. For instance, you could define a side-by-side layout by setting the offsets and scaling factors accordingly. You want to keep both dimensions scaled by the same amount to avoid a stretched preview. Here’s how that might look.

// If 2 concurrent camera selectors were found, create 2 SingleCameraConfigs
// and compose them in a side-by-side layout.
val primary = SingleCameraConfig(
    primaryCameraSelector,
    useCaseGroup,
    CompositionSettings.Builder()
        .setAlpha(1.0f)
        .setOffset(0.0f, 0.25f)
        .setScale(0.5f, 0.5f)
        .build(),
    lifecycleOwner)
val secondary = SingleCameraConfig(
    secondaryCameraSelector,
    useCaseGroup,
    CompositionSettings.Builder()
        .setAlpha(1.0f)
        .setOffset(0.5f, 0.25f)
        .setScale(0.5f, 0.5f)
        .build(),
    lifecycleOwner)

// Bind to lifecycle
val concurrentCamera =
    cameraProvider.bindToLifecycle(listOf(primary, secondary))

We’re excited to offer this improvement to an already developer-friendly feature. Truly the CameraX way! CompositionSettings in Dual Concurrent Camera is currently in alpha, so if you have feature requests to improve upon it before the API is locked in, please give us feedback in the CameraX Discussion Group. And check out the full CameraX 1.5.0-alpha01 release notes to see what else is new in CameraX.

Max implemented UI changes 30% faster using Jetpack Compose

Posted by Tomáš Mlynarič, Developer Relations Engineer

Max®, which launched in the US on May 23, 2023, is an enhanced streaming platform from Warner Bros. Discovery, delivering unparalleled quality content for everyone in the household. Max developers want to provide the best UX possible, and they’re always searching for new ways to do that. That’s why Max developers built the app using Jetpack Compose, Android’s modern declarative toolkit for creating native UI. Building Max’s UI with Compose set the app up for long-term success, enabling developers to build new experiences in a faster and easier way.

Compose streamlines development

Max is the latest app from Warner Bros. Discovery and builds on the company’s prior learnings from HBO Max and discovery+. When Max development began in late 2022, developers had already used Compose to build the content discovery feature on discovery+—one of its core UI features.

“It was natural to continue our adoption of Compose to the Max platform,” said Boris D’Amato, Sr. Staff Software Engineer at Max.

Given the team’s previous experience using Compose on discovery+, they knew it would streamline development and improve the app’s maintainability. In the end, building Max with Compose reduced the app’s boilerplate code, increased the re-usability of its UI elements, and boosted developer productivity overall.

“Compose significantly reduced the time required to implement UI changes, solving the pain point of maintaining a large, complex UI codebase and making it easier to iterate on the app's design and user experience,” said Boris.

Today, Max’s UI is built almost entirely with Compose, and developers estimate that adopting Compose allowed them to implement UI changes 30% faster than with Views. Thanks to the toolkit’s modular nature, developers could build highly reusable components and adapt or combine them to form new UI elements, creating a more cohesive app design.

“Compose significantly reduced the time required to implement UI changes, solving the pain point of maintaining a large, complex UI codebase and making it easier to iterate on the app's design and user experience,” — Boris D’Amato, Sr. Staff Software Engineer at Max

More improvements with Compose

Today, Compose is so integral to Max's design that the app's entire UI architecture is designed specifically to support Compose. For example, developers built a system to dynamically render server-driven, editorially curated content and user-personalized recommendations without having to ship a new version of the app. To support this system, developers relied on best practices for architecting Compose apps, leveraging Compose's smart recomposition and skippability for the smoothest experience possible.
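
To give a rough idea of this pattern (a hypothetical sketch, not Max’s actual code), a server-driven screen in Compose can be modeled as a list of component descriptions that a composable maps to UI; keeping the models immutable and stable is what lets Compose skip recomposing rows whose data hasn’t changed. The component names and fields below are made up for illustration.

import androidx.compose.foundation.lazy.LazyColumn
import androidx.compose.foundation.lazy.items
import androidx.compose.material3.Text
import androidx.compose.runtime.Composable
import androidx.compose.runtime.Immutable

// Hypothetical sketch of server-driven rendering in Compose (not Max's actual code).
// The server sends a list of component descriptions; Compose maps each one to UI.
// Immutable models are stable, so rows whose data hasn't changed can be skipped.
@Immutable
sealed interface HomeComponent {
    data class Hero(val title: String, val imageUrl: String) : HomeComponent
    data class Rail(val title: String, val itemTitles: List<String>) : HomeComponent
}

@Composable
fun HomeScreen(components: List<HomeComponent>) {
    LazyColumn {
        items(components) { component ->
            when (component) {
                is HomeComponent.Hero -> Text(component.title)  // placeholder for a hero banner
                is HomeComponent.Rail -> Text(component.title)  // placeholder for a content rail
            }
        }
    }
}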

Much like the discovery+ platform, Compose is also used for Max’s content discovery feature. This feature helps Max serve tailored content to each user based on how they use the app. Thanks to Compose, it was easy for developers to ensure this feature worked as intended because it allowed them to test each part in manageable segments.

“One of the features most impacted by using Compose was our content discovery system. Compose enabled us to create a highly dynamic and interactive interface that adapts in real-time to user context and preferences,” said Boris.

Adapting to users’ unique needs is another reason Compose has impressed Max developers. Compose makes it easy to support the many different screens and form factors available on the market today. With the Window size classes API, Max can scale its UI in real time to accommodate screen size and shape variations for tablets and foldables.

Examples of UX on large and small screens
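
As a rough sketch of the window size class approach mentioned above (assumed details, not Max’s actual code), the Material 3 window size class API can drive a top-level layout switch:

import android.app.Activity
import androidx.compose.material3.Text
import androidx.compose.material3.windowsizeclass.ExperimentalMaterial3WindowSizeClassApi
import androidx.compose.material3.windowsizeclass.WindowWidthSizeClass
import androidx.compose.material3.windowsizeclass.calculateWindowSizeClass
import androidx.compose.runtime.Composable

// Minimal sketch: choose a layout based on the window width size class so the
// same screen adapts to phones, tablets, and foldables.
@OptIn(ExperimentalMaterial3WindowSizeClassApi::class)
@Composable
fun AdaptiveHome(activity: Activity) {
    val windowSizeClass = calculateWindowSizeClass(activity)
    when (windowSizeClass.widthSizeClass) {
        WindowWidthSizeClass.Compact -> Text("Single-pane layout")  // phones in portrait
        else -> Text("Multi-pane layout")                           // tablets, foldables, landscape
    }
}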

The future with Compose

Since adopting Compose, the Max team has noticed increased interest from prospective job candidates excited about working with the latest Android technologies.

“Whenever we mention that Max is built using Compose, the excitement in the candidates is palpable. It indicates that we’re investing in keeping our tech stack updated and our focus on the developer experience,” said Boris.

Looking ahead, the Max team plans to lean further into its Compose codebase and make even more use of the toolkit’s features, like animation APIs, predictive gestures, and widgets.

“I absolutely recommend Jetpack Compose. Compose's declarative approach to UI development allows for a more intuitive and efficient design process, making implementing complex UIs and animations easy. Once you try Compose, there’s no going back,” said Boris.

Get started

Optimize your UI development with Jetpack Compose.

Developers for adidas CONFIRMED build features 30% faster using Jetpack Compose

Posted by Nick Butcher – Product Manager for Jetpack Compose, and Florina Muntenescu – Developer Relations Engineer

adidas CONFIRMED is an app for the brand’s most loyal fans who want its latest, curated collections that aren’t found anywhere else. The digital storefront gives streetwear, fashion, and style enthusiasts access to adidas' most exclusive drops and crossovers so they can shop them as soon as they go live. The adidas CONFIRMED team wants to provide users a premium experience, and it’s always exploring new ways to elevate the app’s UX. Today, its developers are more equipped than ever to improve the in-app experience using Jetpack Compose, Android’s modern declarative toolkit for building UI.

Improving the UX with Jetpack Compose

adidas CONFIRMED designers conduct quarterly consumer surveys for feedback from users regarding new app flows and UI enhancements. Their surveys revealed that 80% of the app’s users prefer animated visuals because animations encourage them to explore and interact with the app more. adidas CONFIRMED developers wanted to implement new design elements and animations across the app’s interface to strengthen engagement, but the app’s previous View-based system limited their ability to create engaging UX in a scalable way.

“We decided to build dynamic elements and animations across many of our screens and user journeys,” said Rodrigo Represa, an Android engineer at adidas. “We had an ambitious list of UI updates we wanted to make and started looking for solutions to help us achieve them.”

Switching to Compose allowed adidas CONFIRMED developers to create features faster than ever. The improvement in engineering efficiency has been noticeable, with the team estimating that Compose enables them to create new features roughly 30% faster than with Views. Today, more than 80% of the app’s UI has been migrated to Compose.

“I can build the same feature with Compose about 30% faster than with Views.” — Rodrigo Represa, Android engineer at adidas

Innovating the in-app experience

As part of the app’s new interface update, adidas CONFIRMED developers created an exciting, animated experience called Shoes Tournament. This competition positions different brand-collaborator sneakers head to head in a digital tournament where users vote for their favorite shoe. It took two developers only three months to build this feature from the ground up using Compose. And users loved it — it increased the app’s weekly active users by 8%.

UX screens of the Shoes Tournament feature, which adidas’ Android developers built from the ground up in only three months using Compose.

Before transitioning to Compose, it was hard for the team to customize the adidas CONFIRMED app to incorporate branding from its collaborators. With Compose, it’s easy. For instance, the app’s developers can now create a dynamic design system using CompositionLocals. This functionality helps developers update the app's appearance during collab launches, providing a more appealing user experience while maintaining a consistent and clean design.
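
As an illustrative sketch (hypothetical names, not adidas’ actual code), a CompositionLocal can carry collab-specific branding down the UI tree so screens restyle themselves for a launch without being rewritten:

import androidx.compose.material3.Text
import androidx.compose.runtime.Composable
import androidx.compose.runtime.CompositionLocalProvider
import androidx.compose.runtime.staticCompositionLocalOf
import androidx.compose.ui.graphics.Color

// Hypothetical branding holder for a collaborator launch.
data class CollabBranding(val accent: Color, val logoUrl: String?)

// Default branding used when no collab is active.
val LocalCollabBranding = staticCompositionLocalOf {
    CollabBranding(accent = Color.Black, logoUrl = null)
}

@Composable
fun CollabThemedScreen(branding: CollabBranding) {
    // Everything below the provider reads the collab branding implicitly.
    CompositionLocalProvider(LocalCollabBranding provides branding) {
        DropHeader()
    }
}

@Composable
fun DropHeader() {
    val branding = LocalCollabBranding.current
    Text(text = "Confirmed drop", color = branding.accent)
}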

One of the most exciting animations adidas CONFIRMED developers added utilized device sensors. Users can view and interact with the products they’re looking at on product display pages by simply moving their devices, just as if they were holding the product in real life. Developers used Compose to create realistic lighting effects for the animation to make the viewing experience more engaging.

An easier way to build UI

Using composables allowed adidas CONFIRMED developers to reuse existing components. As both the flagship adidas app and the adidas CONFIRMED app are part of the same monorepo, engineers could reuse composables across both apps, like forms and lists, enabling them to implement new features quickly and easily.

“The accelerated development with Compose provided our team of seven with more time, enabling us to strike a healthy balance between delivering new functionalities and ensuring the long-term health and sustainability of our app,” said Rodrigo.

Compose also helped to improve app stability and performance for the team. They noticed a significant reduction in app-related crashes, and have seen virtually no UI-related crashes, since migrating the app to Compose. The team is proud to provide a 99.9% crash-free user experience.

“Compose’s efficiency not only accelerated development, but also helped us achieve our business goals.” — Rodrigo Represa, Android engineer at adidas

A better app built with the future in mind

Compose opened doors to implementing new features faster than ever. With Compose’s clean and concise usage of Kotlin, it was easy for developers to create the ambitious and engaging interface adidas CONFIRMED users wanted. And the team doesn’t plan to stop there.

The adidas CONFIRMED team wants to lean further into its new codebase and fully adopt Compose moving forward. They also want to bring the app to new screens using more of the Compose suite and are currently developing an app widget using Jetpack Glance. This new experience will provide users with a streamlined feed of new product information for an even more efficient user experience.

“I recommend Compose because it simplifies development and is a more intuitive and powerful approach to building UI,” said Rodrigo.

Get started

Optimize your UI development with Jetpack Compose.

Jetpack Compose compiler moving to the Kotlin repository

Posted by Ben Trengrove - Developer Relations Engineer, Nick Butcher - Product Manager for Jetpack Compose

We are excited to announce that with the upcoming release of Kotlin 2.0, the Jetpack Compose compiler will move to the Kotlin repository. This means that a matching Compose compiler will release alongside each release of Kotlin. You will no longer have to wait for a matching Compose compiler release before upgrading the Kotlin version in your Compose app. The Compose team at Google will continue to be responsible for developing the compiler and will work closely with JetBrains, our co-founders of the Kotlin Foundation. The version of the Compose compiler now always matches the Kotlin version. The compiler version is therefore jumping to 2.0.0.

To simplify the set up of Compose, we are also releasing a new Compose Compiler Gradle plugin which lets you configure the Compose compiler with a type safe API. The Compose Compiler Gradle plugin’s versioning matches Kotlin’s, and it is available from Kotlin 2.0.0.

To migrate to the new plugin, add the Compose Compiler Gradle plugin dependency to the plugins section of your Gradle version catalog:

[versions]
kotlin = "2.0.0"

[plugins]
org-jetbrains-kotlin-android = { id = "org.jetbrains.kotlin.android", version.ref = "kotlin" }

# Add the Compose Compiler Gradle plugin; the version matches the Kotlin plugin
compose-compiler = { id = "org.jetbrains.kotlin.plugin.compose", version.ref = "kotlin" }

In your project’s root level Gradle file, add the plugin:

plugins {
   // Existing plugins 
   alias(libs.plugins.compose.compiler) apply false
}

Then in modules that use Compose, apply the plugin:

plugins {
   // Existing plugins
   alias(libs.plugins.compose.compiler)
}

The kotlinCompilerExtensionVersion is no longer required to be configured in composeOptions and can be removed.

composeOptions {
   kotlinCompilerExtensionVersion = libs.versions.compose.compiler.get()
}

If required, you can now add a top level section to the same Gradle file to configure options for the Compose compiler.

android { ... }

composeCompiler {
   enableStrongSkippingMode = true
}

You might currently be referencing the Compose compiler directly in your build setup, rather than using AGP to apply the Compose compiler plugin. If that is the case, note that the Maven artifacts will also change:

Old: androidx.compose.compiler:compiler
New: org.jetbrains.kotlin:kotlin-compose-compiler-plugin-embeddable

Old: androidx.compose.compiler:compiler-hosted
New: org.jetbrains.kotlin:kotlin-compose-compiler-plugin

For an example of this migration, see this pull request.

For more information on migrating to the new Compose compiler artifact, including instructions for non-version catalog setups, see our updated documentation.

How to effectively A/B test power consumption for your Android app’s features

Posted by Mayank Jain - Product Manager, and Yasser Dbeis - Software Engineer; Android Studio

Android developers have been telling us they're looking for tools to help optimize power consumption for different devices on Android.

The new Power Profiler in Android Studio helps Android developers by showing power consumption happening on devices as the app is being used. Understanding power consumption across Android devices can help Android developers identify and fix power consumption issues in their apps. They can run A/B tests to compare the power consumption of different algorithms, features or even different versions of their app.

The new Power Profiler in Android Studio

Apps that are optimized for lower power consumption lead to improved battery and thermal performance of the device, which means an improved user experience on Android.

This power consumption data is made available through the On Device Power Monitor (ODPM) on Pixel 6+ devices, segmented by sub-systems called “Power Rails”. See Profileable power rails for a list of supported sub-systems.

The Power Profiler can help app developers detect problems in several areas:

    • Detecting unoptimized code that is using more power than necessary.
    • Finding background tasks that are causing unnecessary CPU usage.
    • Identifying wakelocks that are keeping the device awake when they are not needed.

Once a power consumption issue has been identified, the Power Profiler can be used when testing different hypotheses to understand why the app could be consuming excessive power. For example, if the issue is caused by background tasks, the developer can try to stop the tasks from running unnecessarily or for longer periods. And if the issue is caused by wakelocks, the developer can try to release the wakelocks when the resource is not in use or use them more judiciously. Then compare the power consumption before/after the change using the Power Profiler.
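
For example (a generic sketch, not code from any particular app), a partial wakelock can be acquired with a timeout and released as soon as the work finishes, so it never outlives the work it protects:

import android.content.Context
import android.os.PowerManager

// Generic sketch: hold a partial wakelock only for the duration of the work,
// with a timeout as a safety net in case release() is never reached.
fun doWorkWithWakeLock(context: Context, work: () -> Unit) {
    val powerManager = context.getSystemService(Context.POWER_SERVICE) as PowerManager
    val wakeLock = powerManager.newWakeLock(
        PowerManager.PARTIAL_WAKE_LOCK,
        "myapp:background-work"  // hypothetical tag
    )
    wakeLock.acquire(10 * 60 * 1000L)  // 10-minute timeout as an upper bound
    try {
        work()
    } finally {
        if (wakeLock.isHeld) wakeLock.release()
    }
}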

In this blog post, we showcase a technique which uses A/B testing to understand how your app’s power consumption characteristics might change with different versions of the same feature - and how you can effectively measure them.

A real-life example of how the Power Profiler can be used to improve the battery life of an app.

Let’s assume you have an app through which users can purchase their favorite movies.

Sample app to demonstrate A/B testing for measuring power consumption
Video (c) copyright Blender Foundation | www.bigbuckbunny.org

As your app becomes popular and is used by more users, you realize that a high quality 4K video takes a long time to load every time the app is started. Because of its large size, you want to understand its impact on power consumption on the device.

Originally, this video was included in 4K quality with the best of intentions: to showcase the best possible movie highlights to your customers.

This makes you think…

    • Do you really need a 4K video banner on the home screen?
    • Does it make sense to load a 4K video over the network every time your app is run?
    • How will the power consumption characteristics of your app change if you replace the 4K video with something of lower quality (while still preserving the vivid look & feel of the video)?

This is a perfect scenario to perform an A/B test for power consumption

With an A/B test, you can test two slightly different variations of the video banner feature and choose the one with the better power consumption characteristics.

Scenario A: Run the app with the 4K video banner on screen and measure power consumption

Scenario B: Run the app with a lower resolution video banner on screen and measure power consumption

A/B Test setup

Let’s take a moment and set up our Android Studio profiler to run this A/B test. We need to start the app, attach the CPU profiler to it, and trigger a system trace (which is where the Power Profiler will be shown).

Step 1

Create a custom “Run configuration” by clicking the 3 dot menu > Edit

Custom run configuration

Step 2

Then select the “Profiling” tab and ensure that “Start this recording on startup” and CPU Activity > System Trace is selected. Then click “Apply”.

Edit configuration settings

Now simply run the “Profile app startup profiling with low overhead” configuration whenever you want to run the app from startup with the CPU profiler attached.

Note on precision

The following example scenarios use the entire app startup for estimating power consumption, for the purposes of this blog. However, you can use more advanced techniques to get even more precise power readings. Some techniques to try are:

    • Isolate and measure power consumption for video playback only after a tap event on the video player
    • Use the trace markers API to mark the start and stop time for power measurement timeline - and then only measure power consumption within that marked window
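
The trace markers approach can be as simple as wrapping the code of interest in a trace section (a sketch; the section name and function parameters below are made up), which then shows up as a marker in the system trace so you can measure power only within that window:

import android.os.Trace

// Sketch: mark the span you care about so it appears in the system trace.
// In the Power Profiler, select just this marked window when reading power rails.
fun playBannerVideoWithTrace(startPlayback: () -> Unit, awaitFirstFrames: () -> Unit) {
    Trace.beginSection("BannerVideoPlayback")  // hypothetical section name
    try {
        startPlayback()
        awaitFirstFrames()
    } finally {
        Trace.endSection()
    }
}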

Scenario A

In this scenario, we run the app with the 4K video playing and measure power consumption for the first 30 seconds. We can optionally also run scenario A multiple times and average out the readings. Once the system trace is shown in Android Studio, select the 0-30 second time range from the timeline selection panel and record a screenshot for comparison against scenario B.

Power consumption in scenario A - playing a 4K video

As you can see, the average power consumed by WLAN, CPU cores, and Memory combined is about 1,352 mW (milliwatts).

Now let’s compare and contrast how this power consumption changes in Scenario B.

Scenario B

In this scenario, we run the app with low quality video playing and measure power consumption for the first 30 seconds. As before, we can also optionally run scenario B multiple times and average out the power consumption readings. Again, once the System trace is shown in Android Studio, select the 0-30 second time range from the timeline selection panel.

Power consumption in scenario B - playing a lower quality video

The total power consumed by WLAN, CPU Little, CPU Big, CPU Mid, and Memory is about 741 mW (milliwatts).

Conclusion

All else being equal, Scenario B (with the lower quality video) consumed about 741 mW of power, compared to Scenario A (with the 4K video), which required about 1,352 mW.

Scenario B (lower quality video) used 45% less power than Scenario A (4K), while the lower quality video makes little to no visible difference in the perceived quality of the app’s screen.

As a result of this A/B test for power consumption, you conclude that replacing the 4K video with a lower quality video on the app’s home screen not only reduces power consumption by 45%, but also reduces the required network bandwidth and can potentially improve the thermal performance of the device.

If your app’s business logic still requires the 4K video to be shown on the app’s screen, you can explore strategies like:

    • Caching the 4K video across subsequent runs of the app.
    • Loading video on a user tap.
    • Loading an image initially and only load the video after the screen has fully rendered (delayed loading).
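
As an illustration of the delayed-loading idea (a sketch with placeholder composables, not a production implementation), the home screen can render a lightweight poster first and only switch to the video once the initial frame has had a chance to draw:

import androidx.compose.material3.Text
import androidx.compose.runtime.Composable
import androidx.compose.runtime.LaunchedEffect
import androidx.compose.runtime.getValue
import androidx.compose.runtime.mutableStateOf
import androidx.compose.runtime.remember
import androidx.compose.runtime.setValue
import kotlinx.coroutines.delay

// Sketch of delayed loading: show a static poster first, then swap in the video
// banner after the screen has rendered, so startup doesn't pay the video cost.
@Composable
fun HomeBanner() {
    var showVideo by remember { mutableStateOf(false) }

    LaunchedEffect(Unit) {
        // Naive stand-in for "after the screen has fully rendered":
        // wait briefly before loading the heavier video.
        delay(500)
        showVideo = true
    }

    if (showVideo) {
        Text("Video banner placeholder")  // hypothetical video player composable
    } else {
        Text("Poster image placeholder")  // hypothetical static poster image
    }
}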

The overall power consumption numbers presented in the above A/B test scenario might seem small, but they illustrate the techniques that app developers can use to effectively A/B test power consumption for their app’s features using the Power Profiler in Android Studio.

Next Steps

The new Power Profiler is available in Android Studio Hedgehog onwards. To know more, please head over to the official documentation.

Google Drive cut code and development time in half with Jetpack Compose and new architecture

Posted by Nick Butcher – Product Manager for Jetpack Compose, and Florina Muntenescu – Developer Relations Engineer

As one of the world’s most popular cloud-based storage services, Google Drive lets people do more than just store their files online. With Drive, users can synchronize, share, search, edit, and even pin specified files and content for safe and secure offline use.

Recently, Drive’s developers revamped the application’s home screen to provide a more seamless experience across devices, matching updates made to Google Drive’s web version. However, the app’s previous architecture and codebase would’ve prevented the team from completing the updates in a timely manner.

Instead of struggling with the app’s previous tech stack to implement the update, the Drive team rebuilt the home page from the ground up using Android’s recommended architecture and Jetpack Compose, Android’s modern declarative toolkit for creating native UI.

“Compose, combined with architecture improvements, cut our development time nearly in half.” — Dale Hawkins, Senior software engineer and tech lead at Google Drive

Experimenting with Kotlin and Compose

The Drive team experimented with Kotlin — which the Compose toolkit is built with — for several months before planning the app’s home screen rebuild. Drive’s developers liked Kotlin’s improved syntax and null enforcement, making it easier to produce code.

“We had been using RxJava, but started looking into replacing that with coroutines,” said Dale Hawkins, the features team lead for Google Drive. “This led to a more natural alignment between coroutines and Jetpack Compose. After a deep dive into Compose, we came away with a clear understanding of how Compose has numerous benefits over the Views-based approach.”

Following the Kotlin exploration, Dale experimented with Jetpack Compose. “I was pleased with how easy it was to build the UI using Compose. So I continued the experiment after that week,” said Dale. “I eventually rewrote the feature using Compose.”

Using Compose

Shortly after experimenting with Jetpack Compose, the Drive team decided to use it to completely rebuild the app’s home screen UI.

“We wanted to make some major changes to match the ones being done for the web version, but that project had a several-month head start. We wanted to release the Android version shortly after the web changes went live to ensure our users have a seamless Google Drive experience across devices,” said Dale.

The Drive team's experimentation and testing with Jetpack Compose showed that the new toolkit was powerful and reliable and that it would enable them to move faster. With this in mind, the Drive team decided to step away from their old codebase and embrace Jetpack Compose for the app’s home screen update. Not only would it be quicker and easier, but it would also better prepare the team to easily make future UI changes.

Using Android’s guidance for architecture

Before going all-in with Jetpack Compose, Drive developers wanted to restructure the application by implementing a completely new app architecture. Drive developers followed Android’s official architecture guidance to apply structural changes, paving the way for the new Kotlin codebase.

“The recommended architecture reinforces good separation between layers,” said Quintin Knudsen, an Android engineer for Google Drive. “We work in a highly dynamic environment and need to be able to adjust to any app changes. Using well-defined and independent layers helps isolate any changes or UI requirements. The recommendations from Android offered sound ways to structure the layers.” With a clear separation between the app’s data and UI layers, developers could work in parallel to significantly speed up testing and development.

Drive developers also relied on Mappers and UseCases when creating the new architecture. These patterns allowed them to create flexible code that is easier to manage. They also exposed flows from their ViewModels to make the UI respond immediately to any data changes, making it much simpler to implement and understand UI updates.
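
A minimal sketch of that pattern (generic names, not Drive’s actual code) has the ViewModel expose a StateFlow that a composable collects, so the UI recomposes as soon as the data layer pushes new state:

import androidx.compose.material3.Text
import androidx.compose.runtime.Composable
import androidx.compose.runtime.collectAsState
import androidx.compose.runtime.getValue
import androidx.lifecycle.ViewModel
import kotlinx.coroutines.flow.MutableStateFlow
import kotlinx.coroutines.flow.StateFlow

// Hypothetical UI state for a home screen.
data class HomeUiState(val fileNames: List<String> = emptyList())

class HomeViewModel : ViewModel() {
    private val _uiState = MutableStateFlow(HomeUiState())
    val uiState: StateFlow<HomeUiState> = _uiState
    // The data layer (omitted here) would update _uiState as repository flows emit.
}

@Composable
fun HomeScreen(viewModel: HomeViewModel) {
    // Recomposes whenever the ViewModel publishes new state.
    val state by viewModel.uiState.collectAsState()
    Text("Showing ${state.fileNames.size} files")
}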

Less code, faster development

With the app’s newly improved architecture and Jetpack Compose, the Drive team was able to develop the app’s new home screen in less than half the time that they expected. They also implemented the new code and finished quality assurance testing nearly seven weeks ahead of schedule.

“Thanks to Compose, we had the groundwork done within a couple of weeks. We delivered a great implementation over a month ahead of schedule, and it’s been praised by product, UX, and even other engineering teams,” said Dale.

Despite having fewer features, the original home screen required over 12,000 lines of code. The new Compose-based home screen has many new features and only required 5,100 lines of code—a 57% reduction. Having less code makes it much easier for developers to maintain the app and implement any updates.

Testing the new UI in Jetpack Compose also required significantly less code. Before Compose, Drive developers used roughly 9,000 lines of code to test about 62% of the UI. With Compose, it took only 2,200 lines to test over 80% of the new UI.

“The original home screen required over 12,000 lines of code. The Compose-based home screen only required 5,100 lines of code. That’s a 57% reduction.” — Dale Hawkins, Senior software engineer and tech lead at Google Drive

Looking forward

A new and improved app architecture paired with Jetpack Compose allowed Drive developers to rebuild the app’s home screen UI faster and easier than they could’ve imagined. The Drive team plans to expand its use of Compose within the application for things like supporting large dynamic displays and text resizing.

“As we work on new projects, we’re taking the opportunity to update older UI code to make use of our new architecture and Compose. The new code will be objectively better and features will be easier to write, test, and maintain,” said Dale.

Get started

Improve app architecture using Android’s official architecture guidance and optimize your UI development with Jetpack Compose.

Better, faster, stronger time zone updates on Android

Posted by Almaz Mingaleev – Software Engineer and Masha Khokhlova – Technical Program Manager

It's that time of year again when many of us move our clocks! Oh wait, your Android devices did it automatically, didn’t they? For Android users living in many countries, this may not be surprising. For example, the US, EU and UK governments haven't changed their time legislation in a while*, so users wake up every morning to see the correct time.

But, what happens when time laws change? If you look globally, governments can and do change their time laws, sometimes every year, and Android devices have to keep up to support our global user base.

To implement a region’s time legislation, Android devices have to follow a set of encoded rules. What are these rules? Let’s start with why rules are needed in the first place. Clearly, 7am in Los Angeles and 7am in London are not the same time. Moreover, if you are in London and want to know the time in Los Angeles, you have to know how many hours to subtract, and this is not fixed throughout the year**. So to tell local time (the time your watches should show), it is convenient to have a reference clock that everybody on the planet agrees on. This clock is named UTC, Coordinated Universal Time. Local time in London during winter matches UTC; during summer it is calculated by adding one hour to UTC, usually referred to as UTC+1. For Los Angeles, local time during summer is UTC-7 (7 hours behind, a UTC offset of -7 hours) and during winter it is UTC-8 correspondingly. When a region changes from one offset to another, we call that a “transition”. The combination of these offsets and the rules for when a transition happens (such as “last Sunday of March” or “first Sunday on or after 8th March”) defines a time zone. For some countries, the time zone rules can be very simple and primarily determined by their chosen UTC offset: “no transitions, we don’t move our clocks forwards and backwards”.
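
To make those offsets concrete, here is a small Kotlin example using the standard java.time API (not Android-specific code) that prints the winter and summer offsets the rules above describe:

import java.time.LocalDateTime
import java.time.ZoneId

// The same zone has different UTC offsets in winter and summer; the transition
// rules encoded in the time zone data determine when each offset applies.
fun main() {
    val london = ZoneId.of("Europe/London")
    val losAngeles = ZoneId.of("America/Los_Angeles")

    val winter = LocalDateTime.of(2024, 1, 15, 12, 0)
    val summer = LocalDateTime.of(2024, 7, 15, 12, 0)

    // London: UTC in winter, UTC+1 in summer.
    println(winter.atZone(london).offset)      // Z
    println(summer.atZone(london).offset)      // +01:00

    // Los Angeles: UTC-8 in winter, UTC-7 in summer.
    println(winter.atZone(losAngeles).offset)  // -08:00
    println(summer.atZone(losAngeles).offset)  // -07:00
}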

Governments can decide to change the UTC offset for regions, introduce new time zone regions, or alter the day that daylight saving transitions occur. When governments do this, the time zone rules on every Android device need to be updated; otherwise, the Android device will continue to follow the old rules, which can lead to an incorrect local time being shown to users in the affected areas.

Android is not alone in needing to keep track of this information. Fortunately, there is a database known as the TZDB (Time Zone Database), supported by IANA (the Internet Assigned Numbers Authority) and maintained by a small group of volunteers, which is used as the basis for local timekeeping on most modern operating systems. The TZDB contains most of the information that Android needs.

There is no schedule, but typically the TZDB releases a new update 4-5 times a year. The Android team wants to release updates that affect its devices as soon as possible.

How do these changes reach your devices?

    1.    Government signs a law / decree.

    2.    Someone lets IANA know about these changes.

    3.    Depending on how much lead time was given and on changes announced by other countries, IANA publishes a new TZDB release.

    4.    The Android team incorporates the TZDB release (along with a small amount of additional information we obtain from related projects and derive ourselves) into our codebase.

    5.    We roll out these updates to your devices. How the roll-out happens depends on the type and age of the Android device.

        a.    Many mobile Android devices are covered by Google’s Project Mainline, which means that Google sends updates to devices directly.

        b.    Some devices are handled by the device’s manufacturer who takes the Android team’s source code updates and releases them to devices themselves according to their own update schedule.

As you can see, there are quite a few steps. Applying, testing and releasing an update can take weeks. And it is not just Android and other operating systems that need to take action: telecoms, banks, airlines and software companies usually have to make adjustments to their own systems and timetables. Citizens of a country need to be made aware of changes so they know what to expect, especially if they are using older devices that might not receive the necessary updates. It all takes time and can cause problems for countless people if it isn’t handled well. The amount of disruption caused by a change is usually determined by the clarity of the legislation and the notice period that governments provide. The TZDB volunteers are good at spotting changes, but it helps if governments notify IANA directly, especially when the exact regions or existing laws affected are unclear. Unfortunately, many recent time zone changes were announced with about a month or less of notice. Android has a set of recommendations for how much notice to provide. Other operating systems have similar recommendations.

Android is constantly evolving. One such improvement, Project Mainline, introduced in Android 10, has made a big difference in how we update important parts of the Android operating system. It allows us to deliver select AOSP components directly through Google Play, making updates faster than a full OTA update and reducing duplication of effort by each OEM.

From the beginning, time zone rules were a Mainline component, called Time Zone Data or the tzdata module. This integration allowed us to react more quickly to government-mandated time zone changes than before. However, until 2023, tzdata updates were still bundled with other Mainline changes, sometimes leading to testing complexities and slower deployment.

In 2023, we made further investments in Mainline's infrastructure and decoupled the tzdata module from the other components. With this isolation, we gained the ability to respond rapidly to time zone legislation changes — often releasing updates to Android users outside of the established release cadence. Additionally, this change means time zone updates can reach a far greater number of Android devices, ensuring you as Android users always see the correct time.

So while your Android phone may not be able to restore that lost hour of sleep, you can rest assured that it will show the accurate time, thanks to volunteers and the Android team.

Curious about the ever-changing world of time zones? Explore the IANA Time Zone Database and learn more about how time and time zones are managed on Android.


*In 2018-2019 there were changes in Alaska. This is a blog post, not technical documentation!

**Because the US and UK apply their daylight saving changes at different local times and on different days of the year.

Embracing Android 14: Meta’s Early Adoption Empowered Enhanced User Experience

Posted by Terence Zhang – Developer Relations Engineer, Google; in partnership with Tina Ho - Partner Engineering, TPM and Kun Wang – Partner Engineering, Partner Engineer

With the first Developer Preview of Android 15 now released, another new Android release that brings new features and under-the-hood improvements for billions of users worldwide will be coming shortly. As Android developers, you are key players in this evolution; by staying on top of the targetSDK upgrade cycle, you are making sure that your users have the best possible experience.

The way Meta, the parent company of Instagram, Facebook, WhatsApp, and Messenger, approached Android 14 provides a blueprint for both developer success and user satisfaction. Meta improved their velocity towards targetSDK adoption by 4x, and so to understand more about how they built this, we spoke to the team at Meta, with an eye towards insights that all developers could build into their testing programs.

Meta’s journey on A14: A blueprint for faster adoption

When Android 11 launched, some of Meta’s apps experienced challenges with existing features, such as Chat Heads, and with new requirements, like scoped storage integration. Fixing these issues was complicated by slow developer tooling adoption and a decentralized app strategy. This experience motivated Meta to create an internal Android OS Readiness Program which focuses on prioritizing early and thorough testing throughout the Android release window and accelerating their apps’ targetSDK adoption.

The program officially launched last year. By compiling apps against each Android 14 beta and conducting thorough automated and smoke tests to proactively identify potential issues, Meta was able to seamlessly adopt new Android 14 features, like Foreground Service types, and send timely feedback and bug reports to the Android team, contributing to improvements in the OS.

Meta also accelerated their targetSDK adoption for Android 14—updating Messenger, Facebook, and Instagram within one to two months of the AOSP release, compared to seven to nine months for Android 12 (an increase of velocity of more than 4x!). Meta’s newly created readiness program unlocked this achievement by working across each app to adopt latest Android changes while still maintaining compatibility. For example, by automating and simplifying their SDK release process, Meta was able to cut rollout time from three weeks to under three hours, enhancing cooperation between individual app teams by providing immediate access to the latest SDKs and allowing for rapid testing of new OS features. The centralized approach also meant Threads adopted Android 14 support quickly despite the fast-growing new app being supported by a minimal team.

Reaping the rewards: The impact on users

Meta's early targetSDK adoption strategy delivers significant benefits for users as well. Here's how:

    • Improved reliability and compatibility: Early adoption of Android previews and betas prevented surprises near the OS launch, guaranteeing a smooth day-one experience for users upgrading to the latest Android version. For example, with partial media permissions, Meta's extensive experimentation with permission flows ensured “users felt informed about the change and in control over their privacy settings,” while maximizing the app's media-sharing functionality.

    • Robust experimentation with new release features: Early Android release adoption gave Meta ample time to collaborate across privacy, design, and content strategy teams, enabling them to thoughtfully integrate the new Android features that come with every release. It also enhanced collaboration on other features: rolling out the Ultra HDR image experience on Instagram within 3 months of the platform release, in an “Android first” manner, is a great example of this, delighting users with brighter, richer colors and a higher dynamic range in their Instagram posts and stories.
Meta's adoption of Ultra HDR in Android 14 brings brighter colors and dynamic range to Instagram posts and stories.

Embrace the latest Android versions

Meta's journey highlights the compelling reasons for Android developers to adopt a similar forward-thinking mindset in working with the Android betas:

    • Test your apps early: Anticipate Android OS changes, ensuring your apps are prepared for the latest target SDK as soon as they become available to create a seamless transition for users who update to the newest Android version.

    • Utilize latest tools to optimize user experience: Test your apps thoroughly against each beta to identify and address any potential issues. Check the Android Studio Upgrade Assistant to highlight major breaking changes in each targetSDKVersion, and integrate the compatibility framework tool into your testing process to help uncover potential app issues in the new OS version.

    • Collaborate with Google: Provide your valuable feedback and bug reports using the Google issue tracker to contribute directly to the improvement of the Android ecosystem.

We encourage you to take full advantage of the Android Developer Previews & Betas program, starting with the newly-released Android 15 Developer Preview 1.

The team behind the success

A big thank you to the entire Meta team for their collaboration in Android 14 and in writing this blog! We’d especially like to recognize the following folks from Meta for their outstanding contributions in establishing a culture of early adoption:

    • Tushar Varshney - Partner Engineering, Partner Engineer
    • Allen Bae - Partner Engineering, EM
    • Abel Del Pino - Facebook, SWE
    • Matias Hanco - Facebook, SWE
    • Summer Kitahara - Instagram, SWE
    • Tom Rozanski - Messenger, SWE
    • Ashish Gupta - WhatsApp, SWE
    • Daniel Hill - Mobile Infra, SWE
    • Jason Tang - Facebook, SWE
    • Jane Li - Meta Quest, SWE

How recommerce startup Beni uses AI to help you shop secondhand

Posted by Lillian Chen – Global Brand and Content Marketing Manager, Google Accelerator Programs

Sarah Pinner’s passion to reduce waste began as a child when she would reach over and turn off her sibling’s water when they were brushing their teeth. This passion has fueled her throughout her career, from joining zero-waste grocery startup Imperfect Foods to co-founding Beni, an AI-powered browser extension that aggregates and recommends resale options while users shop their favorite brands. Together with her co-founder and Beni CTO Celine Lightfoot, Sarah built Beni to make online apparel resale accessible to everyday shoppers in order to accelerate the circular economy and reduce the burden of fashion on the planet.

Sarah explains how the platform helps connect shoppers to secondhand clothing: “Let’s say you’re looking at a Nike shoe. While on the Nike site, Beni pulls resale listings for that same shoe from over 40 marketplaces like Poshmark or Ebay or TheRealReal. Users can simply buy the resale version instead of new to save money and purchase more sustainably. On average, Beni users save about 55% from the new item, and it’s also a lot more sustainable to buy the item secondhand.”

Beni was one of the first companies in the recommerce platform software space, and the competitive landscape is growing. “The more recommerce platforms the better, but Beni is ahead in terms of our partnerships and access to data as well as the ability to search across data,” says Sarah.


How Beni Uses AI

AI helps Beni to ingest all data feeds from their 40+ partnerships into Beni’s database so they can surface the most relevant resale items to the shopper. For example, when Beni receives eBay’s feed for a product search, there may be 100,000 different sizes. The team has trained the Beni model to normalize sizing data. That’s one piece of their categorization.

“When we first started Beni, the intention wasn’t to start a company. It was to solve a problem, and AI has been a great tool to be able to do that,” says Sarah.


Participating in Google for Startups Accelerator: Circular Economy

Beni’s product was built using Google technology, is hosted on Google Cloud and utilizes Vision API Product Search, Vertex AI, BigQuery, and the Chrome web store.

When they heard about the Google for Startups Accelerator: Circular Economy program, it seemed like the perfect fit. “Having been in the circular economy space, and being a software business already using a plethora of Google products, and having a Google Chrome extension - getting plugged into the Google world gave us great insights about very niche questions that are very hard to find online,” says Sarah.

As an affiliate business in resale, Beni’s revenue per transaction is low—a challenge for a business model that requires scale. The Beni team worked one-on-one with Google mentors to best use Google tools in a cost-effective way. Keeping search results relevant is a core piece of the zero-waste model. “Being plugged in and being able to work through ways to improve that relevancy and that reliability with the people in Google who know how to build Google Chrome extensions, know how to use the AI tools on the backend, and deeply understand Search is super helpful.” The Google for Startups Accelerator: Circular Economy program also educated the team in how to selectively use AI tools such as Google’s Vision API Product Search versus building their own tech in-house.

“Having direct access to people at Google was really key for our development and sophisticated use of Google tools. And being a part of a cohort of other circular economy businesses was phenomenal for building connections in the same space,” says Sarah.

Google for Startups Accelerator support extended beyond tech. A program highlight for Sarah was a UX writing deep dive specifically for sustainability. “It showed us all this amazing, tangible research that Google has done about what is actually effective in terms of communicating around sustainability to drive behavior change,” said Sarah. “You can’t shame people into doing things. The way in which you communicate is really important in terms of if people will actually make a change or be receptive.”

Additionally, the new connections made with other circular economy startups and experts in their space was a huge benefit of participating in Google for Startups Accelerator. Mentorship, in particular, provided product-changing value. Google technical mentors shared advice that had a huge impact on the decision for Beni to move from utilizing Vision API Product Search to their own reverse image search. “Our mentors guided us to shift a core part of our technology. It was a big decision and was one of the biggest pieces of mentorship that helped drive us forward. This was a prime example of how the Google for Startups Accelerator program is truly here to support us in building the best products,” says Sarah.


What’s next for Beni

Beni’s mission is straightforward: they’re easing the burden for shoppers to find and buy items secondhand so that they can bring new people into resale and make resale the new norm.

Additionally, Beni continues to be built out as a search platform for secondhand clothing. Beni offers its Chrome extension on desktop and mobile, and will also offer a searchable interface. In addition to building out the platform further, Beni is looking at how it can support other e-commerce platforms and integrate resale into their offerings.

Learn about how to get involved in Google accelerator programs here.