Tag Archives: Google I/O 2024

Top 3 Updates with Compose across Form Factors at Google I/O ’24

Posted by Chris Arriola – Developer Relations Engineer

Google I/O 2024 was filled with lots of updates and announcements around helping you be more productive as a developer. Here are the top 3 announcements around Jetpack Compose and Form Factors from Google I/O 2024:

#1 New updates in Jetpack Compose

The June 2024 release of Jetpack Compose is packed with new features and improvements such as shared element transitions, lazy list item animations, and performance improvements across the board.

With shared element transitions, you can create delightful continuity between screens in your app. This feature works together with Navigation Compose and predictive back so that transitions can happen as users navigate your app. Another highly requested feature—lazy list item animations—is also now supported for lazy lists giving it the ability to animate inserts, deletions, and reordering of items.
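For example, a minimal sketch of animated items in a lazy list (assuming Compose Foundation 1.7's animateItem modifier, with imports omitted as in the other snippets in this post) might look like this:

@Composable
fun AnimatedNameList(names: List<String>) {
    LazyColumn {
        // A stable key per item lets Compose track moves, insertions, and deletions
        items(names, key = { it }) { name ->
            Text(
                text = name,
                // animateItem() animates placement as well as appearance and disappearance
                modifier = Modifier.animateItem()
            )
        }
    }
}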

Jetpack Compose also continues to improve runtime performance with every release. Our benchmarks show a 17% faster time to first pixel in our Jetsnack Compose sample. Additionally, strong skipping mode graduated from experimental to production-ready status, further improving the performance of Compose apps. Simply update your app to take advantage of these benefits.
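If you want to opt in explicitly, a sketch of what that configuration might look like with the Kotlin 2.0 Compose Compiler Gradle plugin is shown below; newer compiler versions may enable strong skipping by default, so treat this as illustrative rather than required:

// build.gradle.kts (module) - sketch assuming the Kotlin 2.0 Compose Compiler Gradle plugin
composeCompiler {
    enableStrongSkippingMode = true
}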

Read What’s new in Jetpack Compose at I/O ‘24 for more information.


#2 Scaling across screens with new Compose APIs and Tools

During Google I/O, we announced new tools and APIs to make it easier to build across screens with Compose. The new Material 3 adaptive library introduces new APIs that allow you to implement common adaptive scenarios such as list-detail and supporting pane. These APIs allow your app to display one or two panes depending on the available size for your app.

Watch Building UI with the Material 3 adaptive library and Building adaptive Android apps to learn more. If you prefer to read, you can check out About adaptive layouts in our documentation.

We also announced that Compose for TV 1.0.0 is now available in beta. The latest updates to Compose for TV include better performance, input support, and a whole range of improved components that look great out of the box. New in this release, we’ve added lists, navigation, chips, and settings screens. We’ve also added a new TV Material Catalog app and updated the developer tools in Android Studio to include a new project wizard to get a running start with Compose for TV.

Finally, Compose for Wear OS has added features such as SwipeToReveal, an expandableItem, and a range of WearPreview supporting annotations. During Google I/O 2024, Compose for Wear OS graduated visual improvements and fixes from beta to stable. Learn more about all the updates to Wear OS by checking out the technical session.

Check out case studies from SoundCloud and Adidas to see how they're leveraging Compose in their apps, and learn more about all the updates for Compose across screens by reading more here!


#3 Glance 1.1

Jetpack Glance is Android’s modern recommended framework for building widgets. The latest version, Glance 1.1, is now stable. Glance is built on top of Jetpack Compose allowing you to use the same declarative syntax that you’re used to when building widgets.
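To give a feel for that declarative style, here is a minimal sketch of a Glance widget; the class name is hypothetical, and the GlanceAppWidgetReceiver and manifest wiring are omitted, as are imports, in keeping with the other snippets in this post:

class HelloGlanceWidget : GlanceAppWidget() { // hypothetical widget
    override suspend fun provideGlance(context: Context, id: GlanceId) {
        provideContent {
            // Glance mirrors Compose's declarative style with its own composables and GlanceModifier
            Column(modifier = GlanceModifier.fillMaxSize().padding(16.dp)) {
                Text("Hello from Glance")
            }
        }
    }
}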

This release brings a new unit test library, Error UIs, and new components. Additionally, we’ve released new Canonical Widget Layouts on GitHub to help you get started faster with a set of layouts that align with best practices, and we’ve published new design guidance on the UI design hub — check it out!

To learn more about using Glance, check out Build beautiful Android widgets with Jetpack Glance. Or if you want something more hands-on, check out the codelab Create a widget with Glance.


You can learn more about the latest updates to Compose and Form Factors by checking out the Compose Across Screens and the What’s new in Jetpack Compose at I/O ‘24 blog posts or watching the spotlight playlist!

Top 3 Updates for Building with AI on Android at Google I/O ‘24

Posted by Terence Zhang – Developer Relations Engineer

At Google I/O, we unveiled a vision of Android reimagined with AI at its core. As Android developers, you're at the forefront of this exciting shift. By embracing generative AI (Gen AI), you'll craft a new breed of Android apps that offer your users unparalleled experiences and delightful features.

Gemini models are powering new generative AI apps both in the cloud and directly on-device. You can now build with Gen AI using our most capable models in the cloud with the Google AI client SDK or Vertex AI for Firebase in your Android apps. For on-device, Gemini Nano is our recommended model. We have also integrated Gen AI into developer tools - Gemini in Android Studio supercharges your developer productivity.

Let’s walk through the major announcements for AI on Android from this year's I/O sessions in more detail!

#1: Build AI apps leveraging cloud-based Gemini models

To kickstart your Gen AI journey, design the prompts for your use case with Google AI Studio. Once you are satisfied with your prompts, integrate the Gemini API directly into your app to access Google’s latest models such as Gemini 1.5 Pro and 1.5 Flash, both with one million token context windows (with two million available via waitlist for Gemini 1.5 Pro).

If you want to learn more about and experiment with the Gemini API, the Google AI SDK for Android is a great starting point. For integrating Gemini into your production app, consider using Vertex AI for Firebase (currently in Preview, with a full release planned for Fall 2024). This platform offers a streamlined way to build and deploy generative AI features.
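As a minimal sketch of calling the Gemini API from Kotlin with the Google AI client SDK (the model name and the BuildConfig field holding your API key are illustrative, not prescriptive):

import com.google.ai.client.generativeai.GenerativeModel

suspend fun summarize(text: String): String? {
    val model = GenerativeModel(
        modelName = "gemini-1.5-flash",      // illustrative model name
        apiKey = BuildConfig.GEMINI_API_KEY  // hypothetical build config field for your key
    )
    val response = model.generateContent("Summarize the following text:\n$text")
    return response.text
}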

We are also launching the first Gemini API Developer competition (terms and conditions apply). Now is the best time to build an app integrating the Gemini API and win incredible prizes! A custom DeLorean, anyone?


#2: Use Gemini Nano for on-device Gen AI

While cloud-based models are highly capable, on-device inference enables offline use, offers low-latency responses, and ensures that data doesn’t leave the device.

At I/O, we announced that Gemini Nano will be getting multimodal capabilities, enabling devices to understand context beyond text – like sights, sounds, and spoken language. This will help power experiences like Talkback, helping people who are blind or have low vision interact with their devices via touch and spoken feedback. Gemini Nano with Multimodality will be available later this year, starting with Google Pixel devices.

We also shared more about AICore, a system service managing on-device foundation models, enabling Gemini Nano to run on-device inference. AICore provides developers with a streamlined API for running Gen AI workloads with almost no impact on the binary size while centralizing runtime, delivery, and critical safety components for Gemini Nano. This frees developers from having to maintain their own models, and allows many applications to share access to Gemini Nano on the same device.

Gemini Nano is already transforming key Google apps, including Messages and Recorder to enable Smart Compose and recording summarization capabilities respectively. Outside of Google apps, we're actively collaborating with developers who have compelling on-device Gen AI use cases and signed up for our Early Access Program (EAP), including Patreon, Grammarly, and Adobe.

Moving image of Gemini Nano operating in Adobe

Adobe is one of these trailblazers, exploring Gemini Nano to enable on-device processing for part of its AI assistant in Acrobat, providing one-click summaries and allowing users to converse with documents. By strategically combining on-device and cloud-based Gen AI models, Adobe optimizes for performance, cost, and accessibility. Simpler tasks like summarization and suggesting initial questions are handled on-device, enabling offline access and cost savings. More complex tasks such as answering user queries are processed in the cloud, ensuring an efficient and seamless user experience.

This is just the beginning - later this year, we'll continue investing heavily in on-device Gen AI and aim to launch with even more developers.

To learn more about building with Gen AI, check out the I/O talks Android on-device GenAI under the hood and Add Generative AI to your Android app with the Gemini API, along with our new documentation.


#3: Use Gemini in Android Studio to help you be more productive

Besides powering features directly in your app, we’ve also integrated Gemini into developer tools. Gemini in Android Studio is your Android coding companion, bringing the power of Gemini to your developer workflow. Thanks to your feedback since its preview as Studio Bot at last year’s Google I/O, we’ve evolved our models, expanded to over 200 countries and territories, and now include this experience in stable builds of Android Studio.

At Google I/O, we previewed a number of features available to try in the Android Studio Koala preview release, like natural-language code suggestions and AI-assisted analysis for App Quality Insights. We also shared an early preview of multimodal input using Gemini 1.5 Pro, allowing you to upload images as part of your AI queries — enabling Gemini to help you build fully functional compose UIs from a wireframe sketch.


You can read more about the updates here, and make sure to check out What’s new in Android development tools.

Top 3 Updates for Building Excellent Apps at Google I/O ‘24

Posted by Tram Bui, Developer Programs Engineer, Developer Relations

Google I/O 2024 was filled with the latest Android updates, equipping you with the knowledge and tools you need to build exceptional apps that delight users and stand out from the crowd.

Here are our top three announcements for building excellent apps from Google I/O 2024:

#1: Enhancing User Experience with Android 15

Android 15 introduces a suite of enhancements aimed at elevating the user experience:

    • Edge-to-Edge Display: Take advantage of the default edge-to-edge experience offered by Android 15. Design interfaces that seamlessly extend to the edges of the screen, optimizing screen real estate and creating an immersive visual experience for users (see the sketch after this list).
    • Predictive Back: Predictive back can enhance navigation fluidity and intuitiveness. The system animations are no longer behind a Developer Option, which means users will be able to see helpful preview animations. Predictive back support is available for both Compose and Views.
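Here is a minimal sketch of opting in to edge-to-edge ahead of the Android 15 default, using androidx.activity's enableEdgeToEdge() together with Scaffold insets; the activity and content are illustrative, and imports are omitted as in the other snippets in this post:

class MainActivity : ComponentActivity() {
    override fun onCreate(savedInstanceState: Bundle?) {
        enableEdgeToEdge() // draw behind the system bars on earlier releases as well
        super.onCreate(savedInstanceState)
        setContent {
            Scaffold { innerPadding ->
                // Apply innerPadding so content is not obscured by the status or navigation bars
                Text("Hello, edge-to-edge!", modifier = Modifier.padding(innerPadding))
            }
        }
    }
}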

#2: Stylus Support on Large Screens

Android's enhanced stylus support brings exciting capabilities:

    • Stylus Handwriting: Android now supports handwriting input in text fields for both Views and Compose. Users can seamlessly input text using their stylus without having to switch input methods, which can offer a more natural and intuitive writing experience.
    • Reduced Stylus Latency: To enhance the responsiveness of stylus interactions, Android introduces two new APIs designed to lower stylus latency. Android developers have seen great success with our low latency libraries, with Infinite Painter achieving a 5x reduction in latency, from 60-90 ms down to 8-16 ms.

#3: Wear OS 5: Watch Face Format, Power Conservation, and Performance

In the realm of Wear OS, we are focused on power conservation and performance enhancements:

    • Enhanced Watch Face Format: We've introduced improvements to the Watch Face Format, making it easier for developers to customize and optimize watch faces. These enhancements can enable the creation of more responsive, visually appealing watch faces that delight users.
    • Power Conservation: Wear OS 5 prioritizes power efficiency and battery conservation. It’s now available in developer preview along with a new emulator, so you can leverage these improvements to create Wear OS apps that deliver exceptional battery life without compromising functionality.

There you have it: the top updates from Google I/O 2024 to help you build excellent apps. Excited to explore more? Check out the full playlist for deeper insights into these announcements and other exciting updates unveiled at Google I/O.

Home APIs: Enabling all developers to build for the home

Posted by Matt Van Der Staay – Engineering Director, Google Home


This blog was originally posted on Google for Developers.

As the saying goes, “home is where the heart is.” It’s where we spend the most time; it’s your space to be comfortable, where you can truly relax, connect and make memories. Our homes have gotten more helpful with connected products, such as a smart door lock or Nest thermostat. Despite this momentum, it's still too hard to develop for the home.

We are changing all of that. Building on the foundation of Matter, we've re-envisioned Google Home as a platform for developers - all developers, not just those that build smart home devices. Google Home is the destination to create innovative experiences for the home.

Today, we’re announcing the Home APIs and Home runtime. With the Home APIs, app developers can access over 600M devices, Google’s hubs and Matter infrastructure, and an automation engine powered by Google intelligence - all available on both Android and iOS. Here are five things to know:

1. Any developer can now build an experience that works with Google Home.

The home offers a unique opportunity for developers to create seamless and deeper relationships with users, but developing for the smart home is harder than it needs to be. Building for the smart home means integrating with many device makers, operating hubs and Matter fabrics, and running automation engines driven by intelligent signals.

Whether you build an app specifically for smart home devices or build apps that have nothing to do with the smart home - like a fitness app or delivery app - the Home APIs will let you create experiences that are delightful and differentiated for your customers on both Android and iOS.

2. Access 600 million connected devices from your app

The new Device and Structure APIs let you access over 600M devices with a single integration. Control and manage the devices already connected to Google Home, such as Matter light bulbs or the Nest Learning Thermostat, whether at home, or on the go. You can build a complex app to manage any aspect of a smart home, or simply integrate with a smart device to solve pain points - like turning on the lights automatically before the food delivery driver arrives.

The Home APIs have been designed with privacy and security in mind, leveraging industry standard best practices. Users are always in control and need to explicitly grant access to their structure and smart home devices before an app can access them. And they can easily revoke access at any time from the Google Home app. To ensure quality experiences, developers who adopt the Home APIs must pass certification before launching their app.

The Device and Structure APIs
The Device and Structure APIs provide all of the foundational building blocks to create a smart home experience.

The new Commissioning API lets you set up Matter devices in your app, in the Home app, or directly with Fast Pair on Android, without the need to create a new Matter fabric, saving you time and resources.

The Commissioning API
The Commissioning API provides all of the customer experience to set up a Matter device.

3. Automate with Google’s unique intelligence about the home

As people add more devices to their home, it becomes challenging to make them all work in unison. Over the past year, we have added new signals and allowed those with advanced skills to script their home using generative AI. With the new Automation API, you can create and manage home automations in your app, using Google Home’s new automation engine and intelligent signals.

Automations can be triggered by device signals from the home such as occupancy events from motion sensors, mode changes from appliances, or media events from a smart TV. For example, Yale is using the Automation API to turn on the foyer lights when the front door is unlocked at night. Automations can also use Google’s intelligence signals like home and away, which fuse together signals from devices across the home to create more accurate presence detection.

The Automations API
The Automations API provides all of the tools for creating and managing automations.

4. Expanding hubs for Google Home to the TV

A hub for Google Home is a device that enables remote access and local control of a user’s Matter devices over Wi-Fi and Thread. The Home APIs use the network of hubs for Google Home to control Matter devices whether the user is in the home or away.

Later this year, we’re upgrading our hubs and introducing the Home runtime, so other devices - including Chromecast with Google TV, select panel TVs with Google TV running Android 14 or higher, and eligible LG TVs - will also become hubs for Google Home.

Home APIs make controlling lights and switches locally over a hub feel snappy. We are adopting these APIs in the Google Home app, and our early tests show device control operating up to three times faster than before. Developers using the Home APIs can see faster and more responsive local control in their apps as well.

5. Delightful new experiences from a diverse set of apps

We are working with a broad range of brands across lighting, security, automotive, energy, and entertainment to build seamless smart home experiences that help get more usefulness from the smart home.

Partners from every major smart home category are building on the Home APIs.

Here’s how some of our first partners are using the Home APIs:

ADT’s new Trusted Neighbor will revolutionize the universal practice of “giving a trusted neighbor a key to your home,” enabling users to easily grant secure and temporary access to their homes for neighbors, friends or helpers.

ADT Trusted Neighbor Program

LG will enable millions of TVs to be hubs for Google Home, allowing seamless control of devices from any app built using Home APIs. You will also be able to use the ThinQ mobile app or the Home Hub on the LG TV to control devices.

Home APIs on LG TVs for Google Home

Eve Systems will bring their experience to Android for the first time and build helpful automations like lowering the blinds when the temperature drops at night.

Eve Systems using Home APIs

Google Pixel is bridging the digital and physical worlds so that bedtime mode can not only dim your screen, but can also automatically dim your bedroom lights, lower the shades and lock the front door.

Google Pixel using Home APIs

And this is just the beginning. With the Home APIs, a workout app could keep you cool while you are burning calories by turning on the fan before you begin working out. Or a vacation rental app could make sure that the lights are on and the temperature is just right when a guest arrives. With the Home APIs, now anyone can bridge digital experiences and physical devices.


Sign Up to Build with the Home APIs

Do you have a great idea or feature that you'd like to build into your app with the Home APIs? Tell us about it and join the waitlist for access to the Home APIs or Home runtime. We will expand access on a rolling basis and the first apps built on the Home APIs will come to the Play Store and App Store starting this fall. Learn more about what’s included in the Home APIs from our I/O session on the Google Home Developer Center.

Everything you need to know about Google TV and Android TV OS


Posted by Shobana Radhakrishnan – Senior Director of Engineering, Google TV, and Paul Lammertsma – Developer Relations Engineer

Over the past year, we’ve seen significant growth of Android TV OS, reaching 220 million monthly active devices with a 47% year-over-year increase. This incredible engagement would not be possible without our dedicated developer community. A massive thank you for your contributions.

Android 14 on TV

We’re bringing Android 14 to TV! The next generation of Android provides improvements in performance, sustainability, accessibility, and multitasking to help you build engaging apps for TVs.

  • Performance and sustainability — Android 14 for TV improves on previous OS versions so users get a snappier, more responsive TV experience. We’ve also added new energy modes to put users in control, helping to reduce a TV’s standby power consumption. Ensure your app integrates with MediaSession correctly to prevent content from continuing when input modes change or the panel switches off.
  • Accessibility — New features include color correction, enhanced text options, and improved navigation for users, which can all be toggled on or off using remote shortcuts. Review the accessibility best practices to make sure your app supports these features.
  • Multitasking — Picture-in-picture mode is now supported on qualified Android 14 TV models. To evaluate whether a device supports the feature, query PackageManager for the picture-in-picture feature flag (a fuller sketch follows this list):
    hasSystemFeature(PackageManager.FEATURE_PICTURE_IN_PICTURE)
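A slightly fuller, illustrative sketch of that check might look like the following; the activity name is hypothetical:

// Hedged sketch: only offer picture-in-picture where the system reports the feature
class MyPlayerActivity : ComponentActivity() { // hypothetical activity
    fun maybeEnterPip() {
        val supportsPip =
            packageManager.hasSystemFeature(PackageManager.FEATURE_PICTURE_IN_PICTURE)
        if (supportsPip) {
            enterPictureInPictureMode(PictureInPictureParams.Builder().build())
        }
    }
}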


For additional details, consult the updated Android TV app quality guidelines and the Android 14 for TV release notes.

Compose for TV

Compose for TV is now available in 1.0.0-beta01. We’ve updated the developer tools in Android Studio to include a new project wizard to give you a running start with Compose for TV.

Here are just a few ways Compose makes it easier to build apps for TV:

    • Dedicated components for TV apps. Explore these components in our design guide or in practice by using our new TV Material Catalog app. Since the previous alpha release, we’ve added lists, navigation, chips, and settings screens.
    • Improved input support and performance. We’ve worked hard to address focus issues and ensure that the UI appears and animates smoothly.
    • Ease of implementation and extensive styling. Add components to your app and customize them with minimal code.
    • Broad form-factor support. Reuse business logic from your phone, tablet, or foldable app to render a TV UI with changes that can be as small as simply adding a ViewModel.

Beta01 makes two big changes from alpha10:

    • Several components have graduated from experimental.
    • The ImmersiveList composable has been removed from the androidx-tv-material package.

Carousel and chip components, such as FilterChip, are still experimental, so you’ll want to keep the @ExperimentalTvMaterial3Api annotation if you are using these components in your app. For all other components, you can now remove the @ExperimentalTvMaterial3Api annotation, since these APIs are now available in beta.

We heard your feedback about the variety in the data types that represent content, which made it difficult to design a component in such a way that it would result in less code. If you are using the ImmersiveList composable from the alpha release, replace it with a custom implementation of an immersive list. While ImmersiveList is no longer part of Compose for TV, you can create an immersive list with just a few lines of code:

@Composable
fun SampleImmersiveList() {
    // `movies` and the Movie model (including its `background` color) come from your own data layer
    val selectedMovie = remember { mutableStateOf<Movie?>(null) }

    // Container
    Box(
        modifier = Modifier
            .fillMaxWidth()
            .height(400.dp)
    ) {
        // Background, driven by the currently focused movie
        Box(
            modifier = Modifier
                .fillMaxWidth()
                .aspectRatio(20f / 7)
                .background(selectedMovie.value?.background ?: Color.Transparent)
        ) {}

        // Rows
        LazyRow(
            modifier = Modifier.align(Alignment.BottomEnd),
            // ...
        ) {
            items(movies) { movie ->
                MyMovieCard(
                    modifier = Modifier
                        .onFocusChanged {
                            if (it.hasFocus) {
                                selectedMovie.value = movie
                            }
                        },
                    // ...
                ) {}
            }
        }
    }
}
    

A complete snippet is available in the immersive list sample.

Also consult the comprehensive list of changes in the release notes to migrate any renamed or moved components.

Migrate from the Leanback UI toolkit

We recommend following our step-by-step migration guide to switch from Leanback to Compose for Android TV.

Resources

Whether you’re new to Compose or are in the process of migrating to Compose already, our large collection of resources is here to help you learn best practices for building TV UIs with the modern Android development toolkit, Jetpack Compose.

Engage with the active Android developer community on Stack Overflow for any bugs you encounter, or submit the bugs through our public bug tracker.

Thank you for your continued support of Android TV OS. We can’t wait to see what you’ll do on Google TV with the Android 14 TV OS!

Android for Cars: Bringing more apps to cars

Posted by Vivek Radhakrishnan – Technical Program Manager, and Seung Nam – Product Manager

With technology in cars becoming more capable, the opportunity to deliver safe and seamless connected experiences for drivers and passengers is greater than ever. Google remains committed to the automotive industry and is seeing momentum across Android Auto and cars powered by Android Automotive OS with Google built-in. We’re excited to share updates across our in-car experiences and introduce new programs and resources to make it easier for you to bring your apps to cars. Learn more below and in the Android for Cars Technical Session.

Momentum and updates

With over 200 million cars on the road compatible with Android Auto, and nearly 40 car models like the Nissan Rogue, Renault R5, Acura ZDX, and Ford Explorer offering Google built-in, the time to bring your apps to cars is now.

Over the last year, the ecosystem of apps available across these experiences has grown – thanks to you. New entertainment apps like Max, Peacock and Angry Birds are coming to select cars with Google built-in. On Android Auto, the Uber Driver app is now available, allowing drivers to accept rides and deliveries, and get turn-by-turn directions on a bigger screen.

Image showing Angry Birds on a Volvo EX90 car display
Angry Birds is coming to select cars with Google built-in, including Volvo EX90 (pictured).

We’re also pleased to share that Google Cast is coming to cars with Android Automotive OS, starting with Rivian with more to follow. This allows you to easily cast video content from your phone or tablet directly to the car while parked. If you don’t already offer casting in your app, this is a simple way for your content to reach new audiences in the car.

Coming soon - you can stream content from apps on your phone, like Pluto TV, to Rivian cars via Google Cast.

New car app quality tiers

There are unique considerations when developing apps and experiences for cars including safety, numerous screen sizes, and more. Our priority is developing resources and tools that take these considerations into account and minimize the work needed for you to bring your apps to cars.

We’re introducing new quality tiers, inspired by those that exist for large screens, to streamline the process of bringing existing apps to cars by highlighting what makes for a great user experience in cars. Here are the tiers and what they encompass:

    • Tier 1: Car differentiated
      This tier represents the best of what’s possible in cars. Apps in this tier are specifically built to work across the variety of hardware in cars and can adapt their experience across driving and parked modes. They provide the best user experience designed for the different screens in the car like the center console, instrument cluster and additional screens - like panoramic displays that we see in many premium vehicles.
    • Tier 2: Car optimized
      Most apps available in cars today fall into this tier and provide a great experience on the car’s center stack display. These apps will have some car-specific engineering to include capabilities that can be used across driving or parked modes, depending on the app’s category.
    • Tier 3: Car ready
      Apps in this tier are large screen compatible and are enabled while the car is parked, with potentially no additional work. While these apps may not have car-specific features, users can experience the app just as they would on any large screen Android device.

To learn more about the quality tiers, see Android app quality for cars.

Car ready mobile apps program

Let’s dive deep into Tier 3 apps. In collaboration with car manufacturers, we’re introducing the Car ready mobile apps program to accelerate bringing mobile apps to cars with no additional work for developers.

As part of this program, Google will proactively review mobile apps that are already adaptive and large screen compatible to ensure safety and compatibility in cars. If the app qualifies, we will automatically opt it in for distribution on cars with Google built-in and make it available in Android Auto, without the need for new development or a new release to be created. This program will start with parked app categories like video, gaming and browsers with plans to expand to other app categories in the future.

The program will roll out in the coming months, but if you already offer a large screen compatible adaptive app and it falls into one of these categories, you can request a review to participate sooner. As this program rolls out, availability of your app will depend on platform compatibility.

To learn more about building qualified mobile apps, check out the technical session titled “Building Adaptive Android Apps”. You can find guidance on what to look out for at developer.google.com.

Animation showing AMC+ app on a phone, tablet and car display.
Apps optimized for large screens, like AMC+, may be able to come to cars with little to no development work.

New tools and emulators

To create high quality experiences in cars, we are also introducing some new tools that can help you along the way.

    • First, we have a new emulator for distant and panoramic displays so developers can visualize and test for the growing sizes and number of screens in the car and make sure apps can adapt to the variety of displays for the best experience.
    • We also have a new tool that addresses the wide range of screen shapes and user interfaces (UI) present in cars. Many new car displays have unique curves, insets and angles that impact the UI, so we have an emulator that lets you change the emulator screen to match OEM screen designs. This will help ensure the apps work well on real cars without needing to set up specific OEM emulators or bringing in real cars for testing.
    • Lastly, we’re introducing an Android Automotive OS system image for Pixel Tablet. This will let you physically interact with your app as you would on a car screen. We are opening this up for early access partners for the purpose of development and testing today, and you can request to participate here.

To learn more about how to use these tools, check out the “Build and test a parked app for Android Automotive OS” codelab that will be published tomorrow.

More app categories for cars

As you consider bringing your app to cars, we put together a table to help you understand what app categories are currently open and accepting app submissions across both Android Auto and cars with Google built-in. We will continue to expand the type of apps that can be enabled in cars, so if your app isn’t in one of these categories, stay tuned for future opportunities!

Android for Cars Category Status

Start developing apps for cars today

To learn how to bring your apps to cars, check out the documentation on the Android for Cars developer site and the Android for Cars Technical Session. With all the opportunities across car screens, there has never been a better time to bring your apps and experiences to cars. Thanks for all the contributions to the Android ecosystem. See you on the road!

Scaling Across Screens with Jetpack Compose @ Google I/O ‘24

Posted by Maru Ahues Bouza, Product Management Director, Android Developer

Scaling Across Screens with Jetpack Compose

The promise of Jetpack Compose has always been that a modern toolkit designed to build native UI can help you build better apps faster and easier. As more and more of you - 40% of the top 1k apps, in fact - use (and love) Compose, we’ve been working to extend the benefits you’re seeing on mobile to help you build across form factors as well. At Google I/O 2024, we announced a lot of new updates for Compose that help you build across form factors, including Compose APIs to support adaptive layouts, and new updates for Compose for TV and Wear OS. From foldables to wearables to TVs, Compose is delivering features built to make Android development faster and easier. Apps like yours are already using Compose to support more screens with less code.

When thinking about layouts - think adaptive

Yesterday, we announced a new set of Compose APIs for building adaptive layouts, using Material guidance. These APIs, now in Beta, provide new layouts and components that adapt as users expect when switching between small and large window sizes.

The libraries provide 3 new scaffolds that adapt to the different window sizes that users can place apps in on different types of devices, from phones to foldables to tablets and more.

3 new scaffolds that adapt to different window sizes

NavigationSuiteScaffold

NavigationSuiteScaffold helps make it easier to build navigation UI by automatically complying with Material guidelines to provide your users with an optimal experience based on their window size.

Material guidelines recommend using a navigation bar at the bottom of compact width windows, such as most phones, and a navigation rail on the side of medium width and expanded width windows. It used to be up to each app individually to handle swapping between these components; now NavigationSuiteScaffold does this for you by switching between the components when the window size changes.
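As a rough sketch of how this looks in practice (the destinations and icon are illustrative, and imports are omitted as in the other snippets in this post):

@Composable
fun MyApp() {
    val destinations = listOf("Home", "Favorites", "Settings") // hypothetical destinations
    var current by rememberSaveable { mutableStateOf(destinations.first()) }

    NavigationSuiteScaffold(
        navigationSuiteItems = {
            destinations.forEach { destination ->
                item(
                    selected = destination == current,
                    onClick = { current = destination },
                    icon = { Icon(Icons.Default.Star, contentDescription = destination) },
                    label = { Text(destination) }
                )
            }
        }
    ) {
        // Screen content for the selected destination goes here; the scaffold shows a
        // bottom navigation bar in compact windows and a navigation rail in larger ones.
    }
}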

Navigation bar

ListDetailPaneScaffold & SupportingPaneScaffold

The new library also has ListDetailPaneScaffold and SupportingPaneScaffold, which help you implement canonical layouts that we recommend in many cases - list-detail and supporting pane.

On a phone, you usually organize your app flow through screens. For example, clicking on an item on your list screen brings you to the detail screen.

Detail screen

When adapting to different window sizes, it helps to think of your app in terms of panes rather than screens. For a compact window size class, such as a phone, you might only display one pane. For an expanded window size class, you might show two or more panes at the same time. ListDetailPaneScaffold and SupportingPaneScaffold help you build apps that easily switch between one and two pane layouts.
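Here is a minimal sketch of a list-detail flow driven by the scaffold’s navigator; ContactList and ContactDetails stand in for composables from your own app, and imports are omitted as in the other snippets in this post:

@OptIn(ExperimentalMaterial3AdaptiveApi::class)
@Composable
fun ContactsScreen() {
    // The navigator tracks which pane is shown; the content type (Int) is a hypothetical item id
    val navigator = rememberListDetailPaneScaffoldNavigator<Int>()

    ListDetailPaneScaffold(
        directive = navigator.scaffoldDirective,
        value = navigator.scaffoldValue,
        listPane = {
            AnimatedPane {
                ContactList(onItemClick = { id ->
                    // One pane on compact windows, two panes side by side on expanded windows
                    navigator.navigateTo(ListDetailPaneScaffoldRole.Detail, id)
                })
            }
        },
        detailPane = {
            AnimatedPane {
                ContactDetails(id = navigator.currentDestination?.content)
            }
        }
    )
}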

Different screen layouts

You can learn more about all three of these APIs and how to get started with them in the “Building UI with the Material 3 adaptive library” and “Building adaptive Android apps” technical sessions.

“Integrating SupportingPaneScaffold was effortless and quick. It enabled us to seamlessly organize primary and secondary content on To-Dos. Depending on the window size class, the supporting pane adjusts the UI without any additional custom logic. Delighting our users regardless of what device they use is a key priority for SAP Mobile Start.”
- Software Engineer on SAP Mobile Start

Compose for Wear OS

In the past year, adoption of Compose for Wear OS has grown 200%, showcasing the ease with which Compose allows developers to build for the watch form factor.

Recently we’ve seen top apps such as WhatsApp, Gmail and Google Calendar built entirely using Compose for Wear OS, and it’s the recommended way for building user interfaces for Wear OS apps.

At this year’s Google I/O, Compose for Wear OS is graduating visual improvements and fixes from beta to stable.

In the past year, we’ve added features such as SwipeToReveal, to give users additional means for completing actions, an expandableItem, to enhance the use of the smaller screen and show additional information where needed, and a range of WearPreview supporting annotations, for ensuring your app works optimally across the range of device sizes and font scales.

Compose for Wear OS previews usage in Android Studio

You can get started with Compose for Wear OS by taking the codelab and learn more about all the latest updates for Wear OS via the technical session.

Compose for Android TV

At Google I/O ‘24, we announced that Compose for TV 1.0.0 is now available in beta. Compose for TV is our recommended approach for building delightful UIs for Android TV OS. It brings all of the benefits of Jetpack Compose to your TV apps, making building beautiful and functional experiences in your app much faster and easier.

The latest updates to Compose for TV include better performance, input support, and a whole range of improved components that look great out of the box. New in this release, we’ve added lists, navigation, chips, and settings screens. We’ve also updated the developer tools in Android Studio to include a new project wizard to get a running start with Compose for TV.

The new TV Material Catalog app lets you explore components in Compose for TV with different themes and layouts, and our updated JetStream sample shows how it all fits together.

TV Material Catalog app in action

You can get started with Compose for TV by checking out the dedicated blog, the technical session or taking a look at the integration guides.

Jetpack Glance

Jetpack Glance 1.1.0 is now available in RC, bringing a new unit test library, Error UIs, and new components.

We have also released new Canonical Widget Layouts on GitHub, which are built on top of the Glance components, to allow you to get started faster with a set of layouts that align with best practices.

The first set of layouts are delivered as code samples and a matching figma design kit on Android UI Kit with more layouts coming later this year.

Lastly, we have new design guidance published on the UI design hub—check it out!

A sample of Compose across screens: Jetcaster


We have updated Jetcaster—one of our Compose samples—to adapt across phone, foldable and tablet screens, and added support for TV, Wear and homescreen widgets with Glance. Jetcaster showcases how Compose helps you to build across a range of devices using a shared architecture in a single project.

See how you can extract elements such as your data layer and design system to promote reuse and consistency while delivering an experience tailored to different form factors. You can dive directly into the code on GitHub.

Get started with Compose across screens

With these updates to Compose to help you build for tablets, foldables, wearables and TVs, it is a great time to get started! These technical sessions are a great place to learn more about all the latest updates:

Learn more about how SoundCloud supported more screens using 45% less code with Jetpack Compose!

"Our mobile Compose skills transferred directly to Compose for other form factors. The concepts and most APIs are the same across form factors” - Vitus Ortner, Android engineer at SoundCloud

What’s new in Wear OS – I/O ’24

Posted by Kseniia Shumelchyk, Android Developer Relations Engineer, and Garan Jenkin, Android Developer Relations Engineer

Wear OS has seen incredible growth and advancements over the past year. With watch launches from Pixel, Samsung and more, Wear OS grew its user base by 40% in 2023 and has users in over 160 countries and regions. And Wear OS has expanded to more brands including OnePlus, OPPO and Xiaomi. This growth has been accompanied by heavy investments in performance and power optimization.

In this blog post, we’ll be highlighting some of the key updates we announced at Google I/O this year, so let’s dive in and explore the latest advancements in Wear OS and how you can make the most of the platform.

Wear OS 5 Developer Preview

We’re excited to be releasing the Developer Preview of Wear OS 5, the next version of Google’s smartwatch platform arriving later this year, based on Android 14. Central to our release of Wear OS 5 is continuing to enhance battery life.

Wear OS 5 brings performance improvements over Wear OS 4. Tracking your workout is now more efficient; for example, running a marathon consumes up to 20% less power on Wear OS 5 than on Wear OS 4.

Wear OS 5 brings battery improvements over Wear OS 4 for longer workout tracking

To help you develop power-efficient apps on Wear OS, we’ve released a new guide to conserve power and battery. Be sure to take a look!

Wear OS 5 is based on Android 14, which brings with it a number of developer-facing changes. Check out what’s changed and try the new Wear OS 5 emulator to test your app for compatibility with the new platform version.

Changes in Watch Faces development

Last year we introduced the Watch Face Format as part of Wear OS 4, and we’ve had a fantastic response, with 30% of watch faces in Google Play already using the format. It’s been great to see what you’ve all been able to create so far using the Watch Face Format!

Sample Watch faces created with Watch Face Format

We’re excited to bring you the next iteration of the Watch Face Format with Wear OS 5.

Additionally, we’re announcing some changes to existing watch face development using Jetpack Watch Face APIs. Starting from Wear OS 5, we are introducing restrictions to complications for watch faces built with AndroidX or the Wearable Support Library that will apply to some data sources, as well as Google Play publishing limitations to new watch faces built with these libraries.

Check out the Watch Faces blog post for full details on the new features in Watch Face Format and changes to watch faces development options.

Tooling and library updates

Jetpack Compose for Wear OS

Adoption of Compose on Wear OS has grown 200% in the past year, highlighting the ease with which Compose allows developers to build for the watch form factor. Recently we’ve seen top apps such as WhatsApp, Gmail and Google Calendar built entirely using Compose for Wear OS, and it’s the recommended way for building user interfaces for Wear OS apps.

With the 1.3 release of Jetpack Compose for Wear OS, we’ve graduated a number of visual improvements and fixes from beta to stable.

In the past year, we’ve added features such as SwipeToReveal, to give users additional means for completing actions, an expandable item, to enhance the use of the smaller screen and show additional information where needed, and a range of WearPreview supporting annotations, for ensuring your app works optimally across the range of device sizes and font scales.

Compose for Wear OS previews usage in Android Studio

And at Google I/O 2024, we announced a lot of new updates with Jetpack Compose that help you build across form factors, including Wear OS. Read more in this blog and check out how SoundCloud supported more screens using 45% less code with Jetpack Compose.

Tiles and ProtoLayout

Wear OS tiles give users fast, predictable access to the information and actions they rely on most. Version 1.4 of the Jetpack Tiles library, currently in alpha, introduces preview support for Android Studio to help you quickly iterate on your Tile development while also helping you create optimal-looking tiles on a range of display sizes.

Previews can be seen starting in Android Studio Koala Feature Drop (Canary), with the following dependencies:

    • androidx.wear.tiles:tiles-tooling-preview:1.4.0-alpha02+
    • androidx.wear.tiles:tiles-tooling:1.4.0-alpha02+
    • androidx.wear:wear-tooling-preview:1.0.0+

@Preview(device = WearDevices.SMALL_ROUND)
fun smallPreview(context: Context) = TilePreviewData(
    onTileRequest = { request ->
        TilePreviewHelper.singleTimelineEntryTileBuilder(
            buildMyTileLayout()
        ).build()
    }
)
Tiles previews usage in Android Studio

We’ve also introduced better means for your app to determine whether your tiles are in use, through the getActiveTilesAsync() method.
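As a rough sketch of how that check might look (this assumes a static TileService.getActiveTilesAsync(Context, Executor) entry point in the 1.4 alpha; verify the exact signature against the release notes):

fun logActiveTiles(context: Context) {
    val executor = ContextCompat.getMainExecutor(context)
    // Assumption: static entry point added in androidx.wear.tiles 1.4.0-alpha02
    val activeTilesFuture = TileService.getActiveTilesAsync(context, executor)
    activeTilesFuture.addListener({
        // Identifiers for this app's tiles currently in the user's carousel
        Log.d("Tiles", "Active tiles: ${activeTilesFuture.get()}")
    }, executor)
}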

Within ProtoLayout’s stable version 1.1, as used by Tiles, we’ve introduced a number of changes, such as the following:

    • Gradient support in ArcLine.
    • Date-time formatting supports different time zones for dynamic data types.
    • Better text autosizing and ellipsizing options, and consistent font padding behavior.
    • Expandable spacers.
    • Improved accessibility for Clickable elements.

And from 1.2.0-alpha02, we’ve made it easier for your layouts to adjust appropriately for different display sizes by adding the setResponsiveContentInsetEnabled() method to PrimaryLayout, as well as updating it for EdgeContentLayout. To use this setter, update your code as follows:

PrimaryLayout.Builder(deviceParameters)
    .setResponsiveContentInsetEnabled(true)
    .setContent(
        // ...
    )
    .build()

Easier testing for fitness apps

Android Studio Koala Feature Drop (Canary) brings a new sensor panel to make it easier to test use of Health Services in your Wear OS app. The panel allows you to configure the capabilities of the device, set values of specific data types, and simulate events such as auto-pause and resume of exercises.

Sensor panel usage with Wear OS emulator in Android Studio

Check out this blog to learn more about tooling updates.

Larger Displays

With the momentum surrounding Wear OS, we’re seeing a wider variety of round screen sizes and resolutions, which provides more choices for the user.

We are releasing new guidelines on how to build responsive UIs for different watch display sizes, as well as updates to existing libraries to introduce adaptive layouts and components.

Check out the ComposeStarter sample for Wear OS on GitHub to see how to take advantage of these updates in your app. Furthermore, we’ve updated the sample to provide examples of using tools to evaluate your layouts, including:

    • Previews - demonstrating use of WearPreviewDevices to visualize your layouts on a full range of device sizes and font scaling settings (see the sketch after this list).
    • Screenshot testing - helping you detect issues and regressions in your layouts on different sized devices, with different font scales and locales, representative of real-world devices.
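For example, the preview annotations can be stacked on a single composable like this; GreetingScreen stands in for a composable from your own app, and imports are omitted as in the other snippets in this post:

@WearPreviewDevices
@WearPreviewFontScales
@Composable
fun GreetingScreenPreview() {
    GreetingScreen(name = "preview") // hypothetical composable from your app
}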

Start building for Wear OS now

There has never been a better time to start building for Wear OS! Be sure to check out the Building for the future of Wear OS technical session to learn more about all the latest updates for Wear OS!

To get started:

We’re looking forward to seeing the experiences that you build on Wear OS!

Level up your apps with the latest features from Android Health

Posted by Breana Tate - Developer Relations Engineer, Android Health

Android Health’s mission is to enable billions of Android users to be healthier through access, storage, and control of their health, fitness, and safety data. To further this mission, we offer two primary APIs for developers, Health Connect and Health Services on Wear OS, which are both used by a growing number of apps on Android and Wear OS.

AI capabilities unlock amazing and unique use cases, but to be ready to deliver the most value to your users at the right time, you need a strong foundation of data. Our updates this year focus on helping you build up this data foundation, with support for more data types, new ways to access data, and additional methods of getting timely data updates when you need them.

Changes to the Google Fit APIs

We recently shared that Google Fit developer services will be transitioning to become a core part of the Android Health platform. As part of this, the Google Fit APIs, including the REST API, will remain available until June 30, 2025.

Health Connect is the recommended solution for storing and sharing health and fitness data on Android phones. Beginning with Android 14, it’s available by default in Settings. On pre-Android 14 devices, it’s available for download from the Play Store. Health Connect lets your app connect with hundreds of apps using a single API integration. To date, over 500 apps have integrated with Health Connect and have unlocked deeper insights for their users. Check out the featured list to see some of the apps that have integrated.
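As a minimal sketch of that single-API integration, reading today’s steps might look like the following (this assumes the read permission for StepsRecord has already been granted, and imports are omitted as in the other snippets in this post):

suspend fun readTodaysSteps(context: Context): Long {
    val client = HealthConnectClient.getOrCreate(context)
    val response = client.readRecords(
        ReadRecordsRequest(
            recordType = StepsRecord::class,
            timeRangeFilter = TimeRangeFilter.between(
                LocalDate.now().atStartOfDay(ZoneId.systemDefault()).toInstant(),
                Instant.now()
            )
        )
    )
    // Each StepsRecord carries a count for its time range
    return response.records.sumOf { it.count }
}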

We’re excited to continue supporting the Google Fit Android Recording API functionality through the Recording API on mobile, which allows developers to record steps, and soon distance and calories, in a power-efficient manner. In contrast to the Google Fit Android Recording API, the Recording API on mobile does not store data in the cloud by default, and does not require Google Sign-In. The API is designed to make migrating from the Fit Recording API effortless. Keep an eye on d.android.com/health-and-fitness for upcoming documentation.

Upcoming capabilities from Health Connect

Health Connect will soon add support for background reads and history reads.

Background reads will enable developers to read data from Health Connect while their app is in the background, meaning that you can keep data up-to-date without relying on the user to open your app. This is a departure from current behavior, where apps can only read from Health Connect while the app is in the foreground or running a foreground service.

History reads will give users the option to grant apps access to all historical data in Health Connect, not just the past 30 days.

With both background reads and history reads, users are in control. Both capabilities require developers to declare the respective permissions, and users must approve the permission requests before developers can make use of the data protected by those permissions. Even after granting approval, users have the option of revoking access at any time from within Health Connect settings.
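As a rough sketch of how requesting these capabilities might look once they ship - note that the two new permission strings below are assumptions based on Health Connect’s existing naming scheme, not confirmed API surface:

// Inside an Activity or Fragment; the new permission strings are assumptions
val permissions = setOf(
    HealthPermission.getReadPermission(StepsRecord::class),
    "android.permission.health.READ_HEALTH_DATA_IN_BACKGROUND", // assumed name
    "android.permission.health.READ_HEALTH_DATA_HISTORY"        // assumed name
)

val requestPermissions = registerForActivityResult(
    PermissionController.createRequestPermissionResultContract()
) { granted ->
    // Proceed only with the capabilities the user actually approved
}

// Later, e.g. from a button click:
requestPermissions.launch(permissions)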

Both features will be released later this year, so stay tuned to learn how to add support to your apps!

Updates to Health Services on Wear OS

Health Services on Wear OS is a set of APIs that makes it simple to create power-efficient health and fitness experiences on Wear OS.

In Wear OS 5, we’re introducing 2 new features:

    • New data types for running
    • Support for debounced goals

New Data Types for Running

Starting with Wear OS 5, Health Services will support new data types for running. These data types can help provide additional insights on running form and economy.

The full list of new advanced running metrics is:

    • Ground Contact Time
    • Stride Length
    • Vertical Oscillation
    • Vertical Ratio

As with all data types supported by Health Services on Wear OS, be sure to check exercise capabilities so that your app only uses metrics that are supported on the devices running your app, creating a smoother experience for users. This is especially important for Wear OS, as there is a strong ecosystem of devices for consumers to choose from, and they don’t always support the same metrics.

// Checking if the device supports the RUNNING exercise and confirming the 
// data types that are supported.
suspend fun getExerciseCapabilities(): ExerciseTypeCapabilities? {
   val capabilities = exerciseClient.getCapabilitiesAsync().await()
   return if (ExerciseType.RUNNING in capabilities.supportedExerciseTypes) {
       capabilities.getExerciseTypeCapabilities(ExerciseType.RUNNING)
   } else {
       null
   }
}


. . .


// Checking whether the data types that we want to use are supported by
// the RUNNING exercise on this device.
val dataTypes = setOf(
   DataType.HEART_RATE_BPM_STATS,
   DataType.CALORIES_TOTAL,
   DataType.DISTANCE_TOTAL,
   DataType.GROUND_CONTACT_TIME,
   DataType.VERTICAL_OSCILLATION
).intersect(capabilities.supportedDataTypes)
Checking exercise capabilities with Health Services on Wear OS

To make this easy, we’ve introduced a sensor panel, available starting in Android Studio Koala Feature Drop, which is currently in Canary. You can use the panel to test your app across a variety of device capabilities, experimenting with situations where metrics like heart rate or distance aren’t available.

The Health Services sensor panel

Support for debounced goals

Second, Health Services on Wear OS will soon support debounced goals for instantaneous metrics. These include metrics like heart rate, distance, and speed, for which users want to maintain a specified threshold or range throughout an exercise.

Debounced goals prevent the same event from being emitted multiple times—every time the condition is true—over a short time period. Instead, events are emitted only if the threshold has been continuously exceeded for a (configurable) number of seconds. You can also prevent events from being emitted immediately after goal registration.

This support comes from two new ways to better time goal alerts for instantaneous metrics: duration at threshold and initial delay:

    • Duration at threshold is the amount of uninterrupted time the user needs to cross the specified threshold before Health Services sends an alert event.
    • Initial delay is the amount of time that must pass, since goal registration, before your app is notified.

Together, these features reduce the number of false positives and repeated alerts surfaced to users if your app lets users set fitness goals or targets.

Duration at Threshold

    • Definition: The amount of uninterrupted time the user needs to cross the specified threshold before Health Services will send an alert event.
    • Purpose: Prevent false positives.
    • Counter starts: As soon as the user crosses the specified threshold.

Initial Delay

    • Definition: The amount of time that must pass, since goal registration, before your app is notified.
    • Purpose: Prevent repeatedly notifying the user.
    • Counter starts: As soon as the monitoring request is set.

The differences between Duration at Threshold and Initial Delay

A common use case for debounced goals involves heart rate zones. Heart rate continuously fluctuates throughout an exercise, especially during cardio-intensive activities. Without support for debouncing, an app might get many alerts in a short period of time, such as each time the user’s heart rate dips above or below the target range.

By introducing an initial delay, you can inform Health Services to send a goal alert only after a specified time period has passed; think of this like an adjustment period. And by introducing a duration at threshold, you can take this customization further, by specifying the amount of time that must pass in (or out) of the specified threshold for the goal to be activated. In practice, this would be like waiting for the user to be out of their target heart rate range for 15 seconds before your app lets them know to increase or decrease their intensity.

Check out the technical session, “Building Adaptable Experiences with Android Health” to see this in action!

Your app’s training partner

The Health & Fitness Developer Center is your one-stop-shop for building health & fitness apps on Android! Visit the site for documentation, design inspiration, case studies, and more to learn how to build apps on mobile and Wear OS.

We’re excited to see the Health and Fitness experiences you continue to build on Android!

Latest updates for watch faces on Wear OS

Posted by Anna Bernbaum – Product Manager, and Garan Jenkin – Developer Relations Engineer

At last year’s Google I/O, we launched the Watch Face Format for Wear OS. This year, as part of our continued partnership with Samsung, we are excited to share some new features that you can use to create exciting new watch face designs! These features are now supported in XML definitions, and later in the year, you’ll also see an update to Watch Face Studio to take advantage of them.

The Watch Face Format is the recommended way to create watch faces for Wear OS. The format makes it easier to create customizable and more power-efficient watch faces for devices that run Wear OS 4 or higher. The Watch Face Format is a declarative XML format, so there is no executable code involved in creating a watch face, and there is no code embedded in your watch face APK.

Additionally, in our move toward the Watch Face Format for watch face creation, we have also made some changes to watch face development.

New features in the Watch Face Format

Flavors

Flavors represent preset configurations for your watch face, available in the companion app:

Watch gallery

They allow the watch face developer to configure useful and attractive combinations of the watch face’s configuration options, and allow the user to visualize and select from these with ease.

We’ve now brought flavors to the Watch Face Format. For a full guide on adding them to your watch face, see the flavors reference.

Complications

We’re adding support for both “goal progress” and “weighted elements” complication types to the Watch Face Format:

Circle charts illustrating the goal progress (60% of goal) and weighted elements complication types

    • Goal progress is perfect for data where the user has a target, but that target can be exceeded. A good example is step count.
    • Weighted elements can represent discrete subsets of data, showing their relative sizes, where you might otherwise use something like a pie chart.

Both of these complication types can be accessed through the [COMPLICATION.*] expression object. For full details, see the complication guidance.

Weather

Knowing at-a-glance what the weather will be like for the next hour, day, and beyond can make all the difference to a user’s plans! Unsurprisingly, having weather data as a data source in the Watch Face Format has been a common request, and we’re delighted to be able to introduce it in this latest version. You’ll now be able to make watch faces like this:

A watch face displaying weather information

Weather Basics

Weather in the Watch Face Format is accessed via the [WEATHER.*] expression object. You can use it in Condition and text Template statements, and anywhere expressions are supported.

For example, to show the current weather condition, use this template and expression:

<Template>Current weather conditions: %s
    <Parameter expression="[WEATHER.CONDITION_NAME]"/>
</Template>

The weather provider in the Watch Face Format supports a range of different metric types for the current day, including the following:

    • Current conditions
    • Temperature - current, minimum (low), and maximum (high)
    • UV index
    • Chance of rain

For the full range of data types and conditions, see the weather guide.

Forecasts

In addition to the current weather, you can access forecast data, both by hour and by day. For example, to access the forecast maximum temperature for tomorrow, use a template and set of expressions similar to the following:

<Template>Tomorrow max temp: %d°%s
    <Parameter expression="[WEATHER.DAYS.1.TEMPERATURE_HIGH]" />
    <Parameter expression="[WEATHER.TEMPERATURE_UNIT] == 1 ? &quot;C&quot; : &quot;F&quot;" />
</Template>

When using weather in the Watch Face Format, there are some further details to be aware of, such as checking for forecast availability or loading errors. For all of this and more, take a look at the weather guide.

Changes to Watch Face development

As we gather momentum behind the Watch Face Format, we’re announcing some changes to existing watch face development options.

We announced recently that only some complications will be available on Wear OS 5 for watch faces built with AndroidX or the Wearable Support Library. This restriction does not apply to watch faces that use the Watch Face Format.

Additionally, starting in early 2025 (specific date to be announced in Q4 2024), all new watch faces published on Google Play must use the Watch Face Format. Existing watch faces that use other libraries, such as AndroidX or the Wearable Support Library, can continue to receive updates without transitioning to the new format.

New resources

To make it easier to create watch faces using the Watch Face Format, we’ve published some more resources on GitHub.

You now have full access to the XSD specification, to help you build your own watch face generating tools.

We’ve also provided validators to check your XML for correctness and memory usage. These are the same checks run by Google Play, so it allows you to run these checks even before you submit your watch face for publishing.

Learn more

Get started with the latest version of the Watch Face Format.

Be sure to check out the Building for the future of Wear OS technical session and the What’s new in Wear OS at I/O 2024 blog post to learn more about all the latest updates for Wear OS!

Code snippets license:

Copyright 2023 Google LLC.
SPDX-License-Identifier: Apache-2.0