Tag Archives: Jetpack

Jetpack WindowManager 1.1 is stable!

Posted by Francesco Romano, Developer Relations Engineer on Android

It’s been more than a year since the release of the Jetpack WindowManager 1.0 stable version, and many things have happened in the foldables and large screen space. Many new devices have entered the market, and many new use cases have been unlocked!

Jetpack WindowManager is one of the most important libraries for optimizing your Android app for different form factors. And this release is a major milestone that includes a number of new features and improvements.

Let’s recap all the use cases covered by the Jetpack WindowManager library.

Get window metrics (and size classes!)

Historically, developers relied on the device display size to decide the layout of their apps, but with the availability of different form factors (such as foldables) and display modes (such as multi-window and multi-display), information about the size of the app window, rather than the device display, has become essential.

The Jetpack WindowManager WindowMetricsCalculator interface provides the source of truth to measure how much screen space is currently available for your app.

Built on top of that, the window size classes are a set of opinionated viewport breakpoints that help you design, develop, and test responsive and adaptive application layouts. The breakpoints have been chosen specifically to balance layout simplicity with the flexibility to optimize your app for unique cases.

With Jetpack Compose, use window size classes by importing them from the androidx.compose.material3 library, which uses WindowMetricsCalculator internally.
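For example, a minimal sketch in Compose, assuming the androidx.compose.material3:material3-window-size-class artifact (CompactLayout, MediumLayout, and ExpandedLayout are hypothetical composables in your app):

@OptIn(ExperimentalMaterial3WindowSizeClassApi::class)
@Composable
fun MyApp(activity: Activity) {
    // Recomputed whenever the window size changes.
    val windowSizeClass = calculateWindowSizeClass(activity)
    when (windowSizeClass.widthSizeClass) {
        WindowWidthSizeClass.Compact -> CompactLayout()
        WindowWidthSizeClass.Medium -> MediumLayout()
        else -> ExpandedLayout()
    }
}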

For View-based apps, you can use the following code snippet to compute the window size classes:

private fun computeWindowSizeClasses() {
    val metrics = WindowMetricsCalculator.getOrCreate()
        .computeCurrentWindowMetrics(this)

    val widthDp = metrics.bounds.width() / resources.displayMetrics.density
    val widthWindowSizeClass = when {
        widthDp < 600f -> WindowSizeClass.COMPACT
        widthDp < 840f -> WindowSizeClass.MEDIUM
        else -> WindowSizeClass.EXPANDED
    }

    val heightDp = metrics.bounds.height() / resources.displayMetrics.density
    val heightWindowSizeClass = when {
        heightDp < 480f -> WindowSizeClass.COMPACT
        heightDp < 900f -> WindowSizeClass.MEDIUM
        else -> WindowSizeClass.EXPANDED
    }
}

To learn more, see our Support different screen sizes developer guide.

Make your app fold aware

Jetpack WindowManager also provides all the APIs you need to optimize the layout for foldable devices.

In particular, use WindowInfoTracker to query FoldingFeature information, such as:

  • state: The folded state of the device, FLAT or HALF_OPENED
  • orientation: The orientation of the fold or device hinge, HORIZONTAL or VERTICAL
  • occlusion type: Whether the fold or hinge conceals part of the display, NONE or FULL
  • is separating: Whether the fold or hinge creates two logical display areas, true or false
  • bounds: The bounding rectangle of the feature within the application window (inherited from DisplayFeature)

You can access this data through a Flow:

override fun onCreate(savedInstanceState: Bundle?) {
    ...
    lifecycleScope.launch(Dispatchers.Main) {
        lifecycle.repeatOnLifecycle(Lifecycle.State.STARTED) {
            WindowInfoTracker.getOrCreate(this@MainActivity)
                .windowLayoutInfo(this@MainActivity)
                .collect { layoutInfo ->
                    // New posture information
                    val foldingFeature = layoutInfo.displayFeatures
                        .filterIsInstance<FoldingFeature>()
                        .firstOrNull()
                    // Use the folding feature to update the layout
                }
        }
    }
}

Once you collect the FoldingFeature info, you can use the data to create an optimized layout for the current device state, for example, by implementing tabletop mode! You can see a tabletop mode example in MediaPlayerActivity.kt.
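As a minimal sketch, modeled on the codelab's approach, a tabletop check might look like this:

// Tabletop: the device is half-opened with a horizontal fold.
fun isTableTopPosture(foldingFeature: FoldingFeature?): Boolean =
    foldingFeature != null &&
        foldingFeature.state == FoldingFeature.State.HALF_OPENED &&
        foldingFeature.orientation == FoldingFeature.Orientation.HORIZONTAL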

A great place to start learning about foldables is our codelab: Support foldable and dual-screen devices with Jetpack WindowManager.

Show two Activities side by side

Last, but not least, you can use the latest stable Jetpack WindowManager API: activity embedding.

Available since Android 12L, activity embedding enables developers with legacy multi-activity architectures to display multiple activities from the same application—or even from multiple applications—side by side on large screens.

It’s a great way to implement list-detail layouts with minimal or no code changes.

Note: Modern Android Development (MAD) recommends using a single-activity architecture based on Jetpack APIs, including Jetpack Compose. If your app uses fragments, check out SlidingPaneLayout. Activity embedding is designed for multiple-activity, legacy apps that can't be easily updated to MAD.

It is also the biggest change in the library, as the activity embedding APIs are now stable in 1.1!

Not only that, but the API is now richer in features, as it enables you to (see the sketch after this list):

  • Modify the behavior of the split screen (split ratio, rules, finishing behavior)
  • Define placeholders
  • Check (and change) the split state at runtime
  • Implement horizontal splits
  • Start a modal in full window
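As a rough sketch of what registering a split rule looks like with the stable API (ListActivity and DetailActivity are placeholder activity classes, and the 0.5 ratio is just an example):

val filters = setOf(
    SplitPairFilter(
        ComponentName(context, ListActivity::class.java),
        ComponentName(context, DetailActivity::class.java),
        null // no Intent action filter
    )
)
val splitPairRule = SplitPairRule.Builder(filters)
    .setDefaultSplitAttributes(
        SplitAttributes.Builder()
            .setSplitType(SplitAttributes.SplitType.ratio(0.5f))
            .build()
    )
    .setFinishPrimaryWithSecondary(SplitRule.FinishBehavior.ADJACENT)
    .build()
RuleController.getInstance(context).addRule(splitPairRule)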

Interested in exploring activity embedding? We’ve got you covered with a dedicated codelab: Build a list-detail layout with activity embedding.

Many apps are already using activity embedding in production, for example, WhatsApp:

Image of WhatsApp on a large screen device showing activity embedding

And eBay!

Image of eBay on a large screen device showing activity embedding

Implementing list-details layouts with multiple activities is not the only use case of activity embedding!

Starting from Android 13 (API level 33), apps can embed activities from other apps.

Cross‑application activity embedding enables visual integration of activities from multiple Android applications. The system displays an activity of the host app and an embedded activity from another app on screen side by side or top and bottom, just as in single-app activity embedding.

Host apps implement cross-app activity embedding the same way they implement single-app activity embedding, but the embedded app must opt in for security reasons.
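For example, the embedded app's opt-in is a manifest attribute on the activity to be embedded (a sketch; MyEmbeddableActivity is a placeholder name):

<activity
    android:name=".MyEmbeddableActivity"
    android:allowUntrustedActivityEmbedding="true" />

android:allowUntrustedActivityEmbedding permits embedding by any host app; alternatively, known hosts can be allowlisted by signing certificate with android:knownActivityEmbeddingCerts.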

You can learn more about cross-application embedding in the Activity embedding developer guide.

Conclusion

Jetpack WindowManager is one of the most important libraries you should learn if you want to optimize your app’s user experience for different form factors.

WindowManager is also adding new, interesting features with every release, so keep an eye out for what’s coming in version 1.2.

See the Jetpack WindowManager documentation and sample app to get started with WindowManager today!

CameraX 1.3 is now in Beta

Posted by Donovan McMurray, Camera Developer Relations Engineer

CameraX, the Android Jetpack camera library which helps you create a best-in-class experience that works consistently across Android versions and devices, is becoming even more helpful with its 1.3 release. CameraX is already used in a growing number of Android apps, encompassing a wide range of use cases from straightforward and performant camera interactions to advanced image processing and beyond.

CameraX 1.3 opens up even more advanced capabilities. With the dual concurrent camera feature, apps can operate two cameras at the same time. Additionally, 1.3 makes it simple to delight users with new HDR video capabilities. You can also now add graphics library transformations (for example, with OpenGL or Vulkan) to the Preview, ImageCapture, and VideoCapture UseCases to apply filters and effects. There are also many other video improvements.

CameraX version 1.3 is officially in Beta as of today, so let’s get right into the details!

Dual concurrent camera

CameraX makes complex camera functionality easy to use, and the new dual concurrent camera feature is no exception. CameraX handles the low-level details like ensuring the concurrent camera streams are opened and closed in the correct order. In CameraX, binding dual concurrent cameras is not that different from binding a single camera.

First, check which cameras support a concurrent connection with getAvailableConcurrentCameraInfos(). A common scenario is to select a front-facing and a back-facing camera.

var primaryCameraSelector: CameraSelector? = null
var secondaryCameraSelector: CameraSelector? = null

for (cameraInfos in cameraProvider.availableConcurrentCameraInfos) {
    primaryCameraSelector = cameraInfos.firstOrNull {
        it.lensFacing == CameraSelector.LENS_FACING_FRONT
    }?.cameraSelector
    secondaryCameraSelector = cameraInfos.firstOrNull {
        it.lensFacing == CameraSelector.LENS_FACING_BACK
    }?.cameraSelector

    if (primaryCameraSelector == null || secondaryCameraSelector == null) {
        // If either a primary or secondary selector wasn't found, reset both
        // to move on to the next list of CameraInfos.
        primaryCameraSelector = null
        secondaryCameraSelector = null
    } else {
        // If both primary and secondary camera selectors were found, we can
        // conclude the search.
        break
    }
}

if (primaryCameraSelector == null || secondaryCameraSelector == null) {
    // Front and back concurrent camera not available. Handle accordingly.
}

Next, create a SingleCameraConfig for each camera, passing in each camera selector from before, along with your UseCaseGroup and LifecycleOwner. Then call bindToLifecycle() on your CameraProvider with both SingleCameraConfigs in a list.


val primary = ConcurrentCamera.SingleCameraConfig(
    primaryCameraSelector,
    useCaseGroup,
    lifecycleOwner
)
val secondary = ConcurrentCamera.SingleCameraConfig(
    secondaryCameraSelector,
    useCaseGroup,
    lifecycleOwner
)
val concurrentCamera = cameraProvider.bindToLifecycle(
    listOf(primary, secondary)
)

For compatibility reasons, dual concurrent camera supports each camera being bound to 2 or fewer UseCases with a maximum resolution of 720p or 1440p, depending on the device.

HDR video

CameraX 1.3 also adds support for 10-bit video streaming along with HDR profiles, giving you the ability to capture video with greater detail, color and contrast than previously available. You can use the VideoCapture.Builder.setDynamicRange() method to set a number of configurations. There are several pre-configured values:

  • HLG_10_BIT - A 10-bit high-dynamic range with HLG encoding. This is the recommended HDR encoding to use because every device that supports HDR capture will support HLG10. See the Check for HDR support guide for details.
  • HDR10_10_BIT - A 10-bit high-dynamic range with HDR10 encoding.
  • HDR10_PLUS_10_BIT - A 10-bit high-dynamic range with HDR10+ encoding.
  • DOLBY_VISION_10_BIT - A 10-bit high-dynamic range with Dolby Vision encoding.
  • DOLBY_VISION_8_BIT - An 8-bit high-dynamic range with Dolby Vision encoding.

First, loop through the available CameraInfos to find the first one that supports HDR. You can add additional camera selection criteria here.

var supportedHdrEncoding: DynamicRange? = null

val hdrCameraInfo = cameraProvider.availableCameraInfos
    .firstOrNull { cameraInfo ->
        val videoCapabilities = Recorder.getVideoCapabilities(cameraInfo)
        val supportedDynamicRanges =
            videoCapabilities.getSupportedDynamicRanges()
        supportedHdrEncoding = supportedDynamicRanges.firstOrNull {
            it != DynamicRange.SDR // Ensure an HDR encoding is found
        }
        supportedHdrEncoding != null
    }

val cameraSelector = hdrCameraInfo?.cameraSelector
    ?: CameraSelector.DEFAULT_BACK_CAMERA

Then, set up a Recorder and a VideoCapture UseCase. If you found a supportedHdrEncoding earlier, also call setDynamicRange() to turn on HDR in your camera app.


// Create a Recorder with Quality.HIGHEST, which will select the highest
// resolution compatible with the chosen DynamicRange.
val recorder = Recorder.Builder()
    .setQualitySelector(QualitySelector.from(Quality.HIGHEST))
    .build()
val videoCaptureBuilder = VideoCapture.Builder(recorder)
if (supportedHdrEncoding != null) {
    videoCaptureBuilder.setDynamicRange(supportedHdrEncoding!!)
}
val videoCapture = videoCaptureBuilder.build()

Effects

While CameraX makes many camera tasks easy, it also provides hooks to accomplish advanced or custom functionality. The new effects methods enable custom graphics library transformations to be applied to frames for Preview, ImageCapture, and VideoCapture.

You can define a CameraEffect to inject code into the CameraX pipeline and apply visual effects, such as a custom portrait effect. When creating your own CameraEffect via the constructor, you must specify which use cases to target (from PREVIEWVIDEO_CAPTURE, and IMAGE_CAPTURE). You must also specify a SurfaceProcessor to implement a GPU effect for the underlying Surface. It's recommended to use graphics API such as OpenGL or Vulkan to access the Surface. This process will block the Executor associated with the ImageCapture. An internal I/O thread is used by default, or you can set one with ImageCapture.Builder.setIoExecutor(). Note: It’s the implementation’s responsibility to be performant. For a 30fps input, each frame should be processed under 30 ms to avoid frame drops.

There is an alternative CameraEffect constructor for processing still images, since higher latency is more acceptable when processing a single image. For this constructor, you pass in an ImageProcessor, implementing the process method to return an image as detailed in the ImageProcessor.Request.getInputImage() method.

Once you’ve defined one or more CameraEffects, you can add them to your CameraX setup. If you’re using a CameraProvider, call UseCaseGroup.Builder.addEffect() for each CameraEffect, then build the UseCaseGroup and pass it to bindToLifecycle(). If you’re using a CameraController, pass all of your CameraEffects into setEffects().

Additional video features

CameraX 1.3 has many additional highly-requested video features that we’re excited to add support for.

With VideoCapture.Builder.setMirrorMode(), you can control when video recordings are reflected horizontally. You can set MIRROR_MODE_OFF (the default), MIRROR_MODE_ON, and MIRROR_MODE_ON_FRONT_ONLY (useful for matching the mirror state of the Preview, which is mirrored on front-facing cameras). Note: in an app that only uses the front-facing camera, MIRROR_MODE_ON and MIRROR_MODE_ON_FRONT_ONLY are equivalent.
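For example, a minimal sketch of configuring mirroring on the VideoCapture use case (recorder is assumed to be set up as shown earlier):

val videoCapture = VideoCapture.Builder(recorder)
    .setMirrorMode(MirrorMode.MIRROR_MODE_ON_FRONT_ONLY)
    .build()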

The PendingRecording.asPersistentRecording() method prevents a video from being stopped by lifecycle events or by the explicit unbinding of the VideoCapture use case that the recording's Recorder is attached to. This is useful if you want to bind to a different camera and continue the video recording with that camera. When this option is enabled, you must explicitly call Recording.stop() or Recording.close() to end the recording.

For videos that are set to record audio via PendingRecording.withAudioEnabled(), you can now call Recording.mute() while the recording is in progress. Pass in a boolean to specify whether to mute or unmute the audio, and CameraX will insert silence during the muted portions to ensure the audio stays aligned with the video.
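A hedged sketch combining the two APIs (outputOptions and the event handling are assumed to be set up elsewhere; asPersistentRecording() is experimental and requires an opt-in annotation):

val recording = recorder
    .prepareRecording(context, outputOptions)
    .withAudioEnabled()
    .asPersistentRecording() // survives rebinding; must be stopped explicitly
    .start(ContextCompat.getMainExecutor(context)) { event ->
        // Handle VideoRecordEvents here
    }

recording.mute(true)  // CameraX records silence while muted
// ...
recording.mute(false) // Unmute; audio stays aligned with the video
recording.stop()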

AudioStats now has a getAudioAmplitude() method, which is perfect for showing a visual indicator to users that audio is being recorded. While a video recording is in progress, each VideoRecordEvent can be used to access RecordingStats, which in turn contains the AudioStats object.

Next steps

Check the full release notes for CameraX 1.3 for more details on the features described here and more! If you’re ready to try out CameraX 1.3, update your project’s CameraX dependency to 1.3.0-beta01 (or the latest version at the time you’re reading this).

If you would like to provide feedback on any of these features or CameraX in general, please create a CameraX issue. As always, you can also reach out on our CameraX Discussion Group.

What’s new in Jetpack Compose

Posted by Jolanda Verhoef, Android Developer Relations Engineer

It has been almost two years since we launched the first stable version of Jetpack Compose, and since then, we’ve seen its adoption and feature set grow spectacularly. Whether you write an application for smartphones, foldables, tablets, ChromeOS devices, smartwatches, or TVs, Compose has got you covered! We recommend using Compose for all new Wear OS, phone, and large-screen apps. With new tooling and library features, extended Material Design 3, large-screen, and Wear OS support, and alpha versions of Compose for homescreen widgets and TV, this is an exciting time!

Compose in the community

In the last year, we’ve seen many companies investigating and choosing Compose to build new features and migrate screens in their production applications. 24% of the top 1000 apps on Google Play have already chosen to adopt Compose! For example, Dropbox engineers told us that they rewrote their search experience in Compose in just a few weeks, which was 40% less time than anticipated, and less than half the time it took the team to build the feature on iOS. They also shared that they were interested in adopting Compose “because of its first-class support for design systems and tooling support”. Our Google Drive team cut their development time nearly in half when using Compose combined with architecture improvements.

It’s great to see how these teams experience faster development cycles, and also feel their UI code is more testable. Inspired? Start by reading our guide How to Adopt Compose for your Team, which outlines how and where to start, and shows the areas of development where Compose can bring huge added value.


Library features & development

Since we released the first Compose Bill of Materials in October last year, we’ve been working on new features, bug fixes, performance improvements, and bringing Compose to everywhere you build UI: phones, tablets, foldables, watches, TV, and your home screen. You can find all changes in the May 2023 release and the latest alpha versions of the Compose libraries.

We’ve heard from you that performance is something you care about, and that it’s not always clear how to create performant Compose applications. We’re continuously improving the performance of Compose. For example, as of last October, we started migrating modifiers to a new and more efficient system, and we’re starting to see the results of that migration. For text alone, this work resulted in an average 22% performance gain that can be seen in the latest alpha release, and these improvements apply across the board. To get these benefits in your app, all you have to do is update your Compose version!

Text and TextField got many upgrades in the past months. Next to the performance improvements we already mentioned, Compose now supports the latest emoji version 🫶 and includes new text features such as outlining text, hyphenation support, and configuring line breaking behavior. Read more in the release notes of the compose-foundation and compose-ui libraries.

The new pager component allows you to horizontally or vertically flip through content, similar to ViewPager2 in Views. It offers deep customization options, making it possible to create visually stunning effects:

Moving image showing HorizontalPager composable
Choose a song using the HorizontalPager composable. Learn how to implement this and other fancy effects in Rebecca Franks' blog post.

The new flow layouts FlowRow and FlowColumn make it easy to arrange content in a vertical or horizontal flow, much like lines of text in a paragraph. They also enable dynamic sizing using weights to distribute the items across the container.

Image of search filters in a real estate app created with flow layouts
Using flow layouts to show the search filters in a real estate app

To learn more about the new features, performance improvements, and bug fixes, see the release notes of the latest stable and newest alpha release of the Compose libraries.

Tools

Developing your app using Jetpack Compose is much easier with the new and improved tools around it. We added tons of new features to Android Studio to improve your workflow and efficiency. Here are some highlights:

Android Studio Flamingo is the latest stable release, bringing you:

  • Project templates that use Compose and Material 3 by default, reflecting our recommended practices.
  • Material You dynamic colors in Compose previews to quickly see how your composable responds to differently colored wallpapers on a user device.
  • Compose functions in system traces when you use the System Trace profiler to help you understand which Compose functions are being recomposed.

Android Studio Giraffe is the latest beta release, containing features such as:

  • Live Edit, allowing you to quickly iterate on your code on an emulator or physical device without rebuilding or redeploying your app.
  • Support for new animations APIs in Animation preview so you can debug any animations using animate*AsState, Crossfade, rememberInfiniteTransition, and AnimatedContent.
  • Compose Preview now supports live updates across multiple files; for example, if you make a change in your Theme.kt file, you can see all previews update automatically in your UI files.
  • Improved auto-complete behavior. For example, we now show icon previews when you’re adding Material icons, and we keep the @Composable annotation when running “Implement Members”.

Android Studio Hedgehog contains canary features such as:

  • Showing Compose state information in the debugger. While debugging your app, the debugger will tell you exactly which parameters have “Changed” or have remained “Unchanged”, so you can more efficiently investigate the cause of the recomposition.
  • You can try out the new Studio Bot, an experimental AI-powered conversational experience in Android Studio to help you generate code, fix issues, and learn about best practices, including all things Compose. This is an early experiment, but we would love for you to give it a try!
  • Emulator support for the newly announced Pixel Fold and Tablet Virtual Devices, so that you can test your Compose app before these devices launch later this year.
  • A new Espresso Device API that lets you apply rotation changes, folds, and other synchronous configuration changes to your virtual devices under test.

We’re also actively working on visual linting and accessibility checks for previews so you can automatically audit your Compose UI and check for issues across different screen sizes, and on multipreview templates to help you quickly add common sets of previews.

Material 3

Material 3 is the recommended design system for Android apps, and the latest 1.1 stable release adds a lot of great new features. We added new components like bottom sheets, date and time pickers, search bars, tooltips, and others. We also graduated many of the core components to stable, added more motion and interaction support, and included edge-to-edge support in many components. Watch this video to learn how to implement Material You in your app:


Extending Compose to more surfaces

We want Compose to be the programming model for UI wherever you run Android. This means including first-class support for large screens such as foldables and tablets and publishing libraries that make it possible to use Compose to write your homescreen widgets, smartwatch apps, and TV applications.

Large screen support

We’ve continued our efforts to make development for large screens easy when you use Compose. The pager and flow layouts that we released are common patterns on large screen devices. In addition, we added a new Compose library that lets you observe the device’s window size class so you can easily build adaptive UI.

When attaching a mouse to an Android device, Compose now correctly changes the mouse cursor to a caret when you hover the cursor over text fields or selectable text. This helps the user to understand what elements on screen they can interact with.

Moving image of Compose adjusting the mouse cursor to a caret when the mouse is hovering over text field

Glance

Today we publish the first beta version of the Jetpack Glance library! Glance lets you develop widgets optimized for Android phone, tablet, and foldable homescreens using Jetpack Compose. The library gives you the latest Android widget improvements out of the box, using Kotlin and Compose:

  • Glance simplifies the implementation of interactive widgets, so you can showcase your app’s top features, right on a user’s home screen.
  • Glance makes it easy to build responsive widgets that look great across form factors.
  • Glance enables faster UI iteration with your designers, ensuring a high-quality user experience.

Wear OS

We launched Compose for Wear OS 1.1 stable last December, and we’re working hard on the new 1.2 release, which is currently in alpha. Here are some of the highlights of the continuous improvements and new features that we’re bringing to your wrist:

  • The placeholder and placeholderShimmer add elegant loading animations that can be used on chips and cards while content is loading.
  • expandableItems make it possible to fold long lists or long text, and only expand to show their full length upon user interaction.
  • Rotary input enhancements available in Horologist add intuitive snap and fling behaviors when a user is navigating lists with rotary input.
  • Android Studio now lets you preview multiple watch screen and text sizes while building a Compose app, using the new preview annotations we added.

Compose for TV

You can now build pixel-perfect living room experiences with the alpha release of Compose for TV! With the new AndroidX TV library, you can apply all of the benefits of Compose to the unique requirements of Android TV. We worked closely with the community to build an intuitive API with powerful capabilities. Engineers from SoundCloud shared with us that “thanks to Compose for TV, we are able to reuse components and move much faster than the old Leanback View APIs would have ever allowed us to.” And Plex shared that “TV focus and scrolling support on Compose has greatly improved our developer productivity and app performance.”

Compose for TV comes with a variety of components such as ImmersiveList and Carousel that are specifically optimized for the living room experience. With just a few lines of code, you can create great TV UIs.

Moving image of TVLazyGrid on a screen

TvLazyColumn {
    items(contentList) { content ->
        TvLazyRow {
            items(content) { cardItem ->
                Card(cardItem)
            }
        }
    }
}

Learn more about the release in this blog post, check out the “What’s new with TV and intro to Compose” talk, or see the TV documentation!

Compose support in other libraries

It’s great to see more and more internally and externally developed libraries add support for Compose. For example, loading pictures asynchronously can now be done with the GlideImage composable from the Glide library. And Google Maps released a library which makes it much easier to declaratively create your map implementations.

GoogleMap(
    // ...
) {
    Marker(
        state = MarkerState(position = LatLng(-34.0, 151.0)),
        title = "Marker in Sydney"
    )
    Marker(
        state = MarkerState(position = LatLng(35.66, 139.6)),
        title = "Marker in Tokyo"
    )
}

New and updated guidance

No matter where you are in your learning journey, we’ve got you covered! We added and revamped a lot of the guidance on Compose.

Happy Composing!

We hope you're as excited by these developments as we are! If you haven't started yet, it's time to learn Jetpack Compose and see how your team and development process can benefit from it. Get ready for improved velocity and productivity. Happy Composing!

Media transcoding and editing, transform and roll out!

Posted by Andrew Lewis - Software Engineer, Android Media Solutions

The creation of user-generated content is on the rise, and users are looking for more ways to personalize and add uniqueness to their creations. These creations are then shared to a vast network of devices, each with its own capabilities. The Jetpack Media3 1.0 release includes new functionality in the Transformer module for converting media files between formats, or transcoding, and applying editing operations. For example, you can trim a clip from a longer piece of media and apply effects to the video track to share over social media, or transcode media into a more efficient codec for upload to a server.

The overall goal of Transformer is to provide an easy-to-use, reliable, and performant API for transcoding and editing media, including support for customizing functionality, following the same API design principles as ExoPlayer. The library is supported on devices running Android 5.0 Lollipop (API 21) onwards and includes device-specific optimizations, giving developers a strong foundation to build on. This post gives an introduction to the new functionality and describes some of the many features we're planning for upcoming releases!


Getting Started

Most operations with Transformer will follow the same general pattern:

  1. Configure a TransformationRequest with settings like your desired output format
  2. Create a Transformer and pass it your TransformationRequest
  3. Apply additional effects and edits
  4. Attach a listener to react to completion events
  5. Start the transformation

Of course, depending on your desired transformations, you may not need every step. Here's an example of transcoding an input video to the H.265/HEVC video format and removing the audio track.

// Create a TransformationRequest and set the output format to H.265
val transformationRequest = TransformationRequest.Builder()
    .setVideoMimeType(MimeTypes.VIDEO_H265)
    .build()

// Create a Transformer
val transformer = Transformer.Builder(context)
    .setTransformationRequest(transformationRequest) // Pass in the TransformationRequest
    .setRemoveAudio(true) // Remove the audio track
    .addListener(transformerListener) // transformerListener is an implementation of Transformer.Listener
    .build()

// Start the transformation
val inputMediaItem = MediaItem.fromUri("path_to_input_file")
transformer.startTransformation(inputMediaItem, outputPath)

During transformation you can get progress updates with Transformer.getProgress. When the transformation completes the listener is notified in its onTransformationCompleted or onTransformationError callback, and you can process the output media as needed.
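For example, a small sketch of polling progress (updateProgressBar is a hypothetical UI callback):

val progressHolder = ProgressHolder()
if (transformer.getProgress(progressHolder) == Transformer.PROGRESS_STATE_AVAILABLE) {
    // progressHolder.progress holds an integer percentage from 0 to 100.
    updateProgressBar(progressHolder.progress)
}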

Check out our documentation to learn about further capabilities in the Transformer APIs. You can also find details about using Transformer to accurately convert 10-bit HDR content to 8-bit SDR in the "Dealing with color washout" blog post to ensure your video's colors remain as vibrant as possible in the case that your app or the device doesn't support HDR content.


Edits, effects, and extensions

Media3 includes a set of core video effects for simple edits, such as scaling, cropping, and color filters, which you can use with Transformer. For example, you can create a Presentation effect to scale the input to 480p resolution while maintaining the original aspect ratio, and apply it with setVideoEffects:

Transformer.Builder(context)
    .setVideoEffects(listOf(Presentation.createForHeight(480)))
    .build()

You can also chain multiple effects to create more complex results. This example converts the input video to grayscale and rotates it by 30 degrees:

Transformer.Builder(context)
    .setVideoEffects(
        listOf(
            RgbFilter.createGrayscaleFilter(),
            ScaleToFitTransformation.Builder()
                .setRotationDegrees(30f)
                .build()
        )
    )
    .build()

It's also possible to extend Transformer’s functionality by implementing custom effects that build on existing ones. Here is an example of subclassing MatrixTransformation, where we start zoomed in by 2 times, then zoom out gradually as the frame presentation time increases:

val zoomOutEffect = MatrixTransformation { presentationTimeUs ->
    val transformationMatrix = Matrix()
    // Video will zoom from 2x to 1x in the first second.
    val scale = 2 - min(1f, presentationTimeUs / 1_000_000f)
    transformationMatrix.postScale(/* sx= */ scale, /* sy= */ scale)
    // The calculated transformations will be applied each frame in turn.
    transformationMatrix
}

Transformer.Builder(context)
    .setVideoEffects(listOf(zoomOutEffect))
    .build()

Here's a screen recording that shows this effect being applied in the Transformer demo app:

moving image showing what subclassing matrix transformation looks like in the Transformer demo app

For even more advanced use cases, you can wrap your own OpenGL code or other processing libraries in a custom GL texture processor and plug those into Transformer as custom effects. See the demo app for some examples of custom effects. The README also has instructions for trying a demo of MediaPipe integration with Transformer.


Coming soon

Transformer is actively under development but ready to use, so please give it a try and share your feedback! The Media3 development branch includes a sneak peek into several new features building on the 1.0 release described here, including support for tone-mapping HDR videos to SDR using OpenGL, previewing video effects using ExoPlayer.setVideoEffects, and custom audio processing. We are also working on support for editing multiple videos in more flexible compositions, with export from Transformer and playback through ExoPlayer, making Media3 an end-to-end solution for transforming media.

We hope you'll find Transformer an easy-to-use and powerful tool for implementing fantastic media editing experiences on Android! You can send us feature requests and bug reports in the Media3 GitHub issue tracker, and follow this blog to get updates on new features. Stay tuned for our upcoming talk “High quality Android media experiences” at Google I/O.

What’s new in multiplatform Jetpack libraries

Posted by Márton Braun, Developer Relations Engineer

To support developers who are already using Kotlin Multiplatform for sharing business logic across mobile platforms, we previously released experimental multiplatform previews of the Collections and DataStore Jetpack libraries, and we've been receiving great feedback from the community.

The multiplatform Collections and DataStore libraries are now moving from experimental developer previews to alpha releases, and will follow the normal release cycle of Jetpack libraries. Annotations, a core Jetpack library, is now also available for multiplatform.

Please note that Kotlin Multiplatform is still in beta, therefore the non-Android targets of these libraries don’t have Jetpack’s usual stability guarantees.

The alpha releases are available from Google’s Maven repository. You can try them by adding the following dependencies to your Kotlin Multiplatform project:

val commonMain by getting {
    dependencies {
        implementation("androidx.annotation:annotation:1.7.0-alpha02")
        implementation("androidx.collection:collection:1.3.0-alpha04")

        // Lower-level APIs with support for custom serialization
        implementation("androidx.datastore:datastore-core-okio:1.1.0-alpha03")

        // Higher-level APIs for storing values of basic types
        implementation("androidx.datastore:datastore-preferences-core:1.1.0-alpha03")
    }
}
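As a hedged sketch of what usage in common code can look like, assuming the Okio-backed createWithPath entry point from the DataStore 1.1.0 alphas (producePlatformPath is a hypothetical expect/actual helper returning a platform-appropriate okio.Path):

val dataStore: DataStore<Preferences> =
    PreferenceDataStoreFactory.createWithPath(
        produceFile = { producePlatformPath("settings.preferences_pb") }
    )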

The multiplatform DiceRoller sample app has also been updated to use the new alpha version of DataStore.

To provide feedback on these multiplatform releases, create a bug on our issue tracker, or join the conversation in the Kotlinlang #multiplatform channel.

What’s new in WindowManager 1.1.0-beta01

Posted by Jon Eckenrode, Technical Writer, Software Engineering

The 1.1.0-beta01 release of Jetpack WindowManager continues the library’s steady progress toward stable release of version 1.1.0. The beta adds an assortment of new features and capabilities, which are ready for testing and early adoption today!

We need your feedback so we can make WindowManager work best for you. Add the 1.1.0-beta01 dependency to your app, follow the migration steps below (if you’re already using a previous version of the library), and let us know what you think!

Activity embedding

androidx.window.embedding

Activity embedding enables you to optimize your multi-activity apps for large screens. The 1.1.0-beta01 release augments and refactors the APIs to provide greater versatility, capability, and control in managing task window splits. We started with experimental APIs in 1.0.0 and are promoting them ultimately to stable in 1.1.0.

tl;dr

  • Added a manifest setting so you can inform the system your app has implemented activity embedding.
  • Refactored SplitController to be more focused on split properties; extracted split rule APIs to RuleController and activity embedding APIs to ActivityEmbeddingController.
  • Added the SplitAttributes class to describe embedding splits.
  • Added the EmbeddingAspectRatio class to set a minimum ratio for applying activity embedding rules.
  • Changed pixel units to density-independent pixels (dp).
  • Enabled customization of split layouts.
  • Added a tag to rules so that developers can identify and manage specific rules.

What’s new

PROPERTY_ACTIVITY_EMBEDDING_SPLITS_ENABLED

  • Added as a boolean property of the <application> tag in the app manifest.

ActivityEmbeddingController

  • Added class for operations related to the Activity or ActivityStack classes.

  • Includes isActivityEmbedded() to replace the API in SplitController.

RuleController

  • Added class for operations related to the EmbeddingRule class and subclasses.
  • Includes the following APIs to replace APIs in SplitController:
    • addRule() — Adds a rule or updates the rule that has the same tag.
    • removeRule() — Removes a rule from the collection of registered rules.
    • setRules() — Establishes a collection of rules.
    • clearRules() — Removes all registered rules.
    • parseRules() — Parses rules from XML rule definitions.

SplitAttributes

  • Added class to define the split layout.

EmbeddingAspectRatio

  • Added class to define enum-like behavior constants related to display aspect ratio. Lets you specify when splits are enabled based on the parent window’s aspect ratio. See SplitRule for properties that use the constants.


What’s changed

EmbeddingRule

  • Added a tag field for identification of split rules.

SplitController

  • Refactored APIs to the following modules:
    • ActivityEmbeddingController
      • Moved isActivityEmbedded() to ActivityEmbeddingController.
    • RuleController
      • Removed the following APIs and replaced their functionality with RuleController APIs:
        • clearRegisteredRules()
        • getSplitRules()
        • initialize()
        • registerRule()
        • unregisterRule()
  • Deprecated the isSplitSupported() method and replaced it with the splitSupportStatus property to provide more detailed information about why the split feature is not available.
  • The getInstance() method now has a Context parameter.

    Note: The getInstance() methods of ActivityEmbeddingController and RuleController also have a Context parameter.

  • Added SplitAttributes calculator functions to customize split layouts:
    • setSplitAttributesCalculator()
    • clearSplitAttributesCalculator()
    • isSplitAttributesCalculatorSupported() to check whether the SplitAttributesCalculator APIs are supported on the device.
  • Defined the SplitSupportStatus nested class to provide state constants for the splitSupportStatus property. Enables you to modify app behavior based on whether activity embedding splits are supported in the current app environment.

SplitRule

  • Added the defaultSplitAttributes property, which defines the default layout of a split; replaces splitRatio and layoutDirection.
  • Added translation of the XML properties splitRatio and splitLayoutDirection to defaultSplitAttributes.
  • Changed minimum dimension definitions to use density-independent pixels (dp) instead of pixels:
    • Changed minWidth to minWidthDp with default value 600dp.
    • Changed minSmallestWidth to minSmallestWidthDp with default value 600dp.
    • Added the minHeightDp property with default value 600dp.
  • Added maxAspectRatioInHorizontal with default value ALWAYS_ALLOW.
  • Added maxAspectRatioInPortrait with default value 1.4.
  • Defined the FinishBehavior nested class to replace finish behavior constants.
  • Applied the property changes to the Builder nested class of SplitPairRule and SplitPlaceholderRule.

SplitInfo

  • Replaced getSplitRatio() with getSplitAttributes() to provide additional split-related information.

Window layout

androidx.window.layout

The window layout library lets you determine features of app display windows. With the 1.1.0-beta01 release, you can now work in contexts other than just activities.

What’s changed

WindowInfoTracker

  • Added non-activity UI context support (experimental).

WindowMetricsCalculator

  • Added non-activity UI context support.

Migration steps

Take the next step and upgrade your app from a previous alpha version. And please let us know how we can further facilitate the upgrade process.

PROPERTY_ACTIVITY_EMBEDDING_SPLITS_ENABLED

  • To enable activity embedding, apps must add the property to the <application> tag in the app manifest:

<property
    android:name="android.window.PROPERTY_ACTIVITY_EMBEDDING_SPLITS_ENABLED"
    android:value="true" />

When the property is set to true, the system can optimize split behavior for the app early.

SplitInfo

  • Check if the current split is stacked:

val isStacked = splitInfo.splitAttributes.splitType is
    SplitAttributes.SplitType.ExpandContainersSplitType

  • Check the current ratio:

val splitType = splitInfo.splitAttributes.splitType
if (splitType is SplitAttributes.SplitType.RatioSplitType) {
    val ratio = splitType.ratio
} else {
    // Ratio is meaningless for other types.
}

SplitController

  • SplitController.getInstance()

    changes to:

    SplitController.getInstance(Context)

  • SplitController.initialize(Context, @ResId int)

    changes to:

    RuleController.getInstance(Context)
        .setRules(RuleController.parse(Context, @ResId int))

  • SplitController.getInstance().isActivityEmbedded(Activity)

    changes to:

    ActivityEmbeddingController.getInstance(Context)
        .isActivityEmbedded(Activity)

  • SplitController.getInstance().registerRule(rule)

    changes to:

    RuleController.getInstance(Context).addRule(rule)

  • SplitController.getInstance().unregisterRule(rule)

    changes to:

    RuleController.getInstance(Context).removeRule(rule)

  • SplitController.getInstance().clearRegisteredRules()

    changes to:

    RuleController.getInstance(Context).clearRules()

  • SplitController.getInstance().getSplitRules()

    changes to:

    RuleController.getInstance(Context).getRules()


SplitRule

  • Change minWidth to minWidthDp and minSmallestWidth to minSmallestWidthDp.
  • minWidthDp and minSmallestWidthDp now use dp units instead of pixels.

To convert pixel values, apps can use the following call:

TypedValue.applyDimension(
    COMPLEX_UNIT_DIP,
    minWidthInPixels,
    resources.displayMetrics
)

or simply divide minWidthInPixels by displayMetrics.density.


SplitPairRule.Builder

  • SplitPairRule.Builder(filters, minWidth, minSmallestWidth)

    changes to:

    SplitPairRule.Builder(filters)
        // Optional if the minWidthInDp argument is 600.
        .setMinWidthDp(minWidthInDp)
        // Optional if the minSmallestWidthInDp argument is 600.
        .setMinSmallestWidthDp(minSmallestWidthInDp)

  • setLayoutDirection(layoutDirection) and setSplitRatio(ratio)

    change to:

    setDefaultSplitAttributes(
        SplitAttributes.Builder()
            .setLayoutDirection(layoutDirection)
            .setSplitType(SplitAttributes.SplitType.ratio(ratio))
            .build()
    )

  • setFinishPrimaryWithSecondary and setFinishSecondaryWithPrimary take the FinishBehavior enum-like constants. See SplitRule migrations for details.
  • Use setMaxAspectRatioInPortrait(EmbeddingAspectRatio.ALWAYS_ALLOW) to show splits on portrait devices.

SplitPlaceholderRule.Builder

  • Has only filters and placeholderIntent parameters; other properties move to setters. See SplitPairRule.Builder migrations for details.
  • setFinishPrimaryWithPlaceholder takes the FinishBehavior enum-like constants. See finish behavior migrations for details.
  • setLayoutDirection(layoutDirection) and setSplitRatio(ratio)

    change to:

    setDefaultSplitAttributes(
        SplitAttributes.Builder()
            .setLayoutDirection(layoutDirection)
            .setSplitType(SplitAttributes.SplitType.ratio(ratio))
            .build()
    )

    See layout direction migrations for details.

  • Use setMaxAspectRatioInPortrait(EmbeddingAspectRatio.ALWAYS_ALLOW) to show splits on portrait devices.

                         
Finish behavior

Finish behavior constants must be migrated to FinishBehavior enum-like class constants:

  • FINISH_NEVER changes to FinishBehavior.NEVER
  • FINISH_ALWAYS changes to FinishBehavior.ALWAYS
  • FINISH_ADJACENT changes to FinishBehavior.ADJACENT

Layout direction

Layout direction must be migrated to SplitAttributes.LayoutDirection:

  • ltr changes to SplitAttributes.LayoutDirection.LEFT_TO_RIGHT
  • rtl changes to SplitAttributes.LayoutDirection.RIGHT_TO_LEFT
  • locale changes to SplitAttributes.LayoutDirection.LOCALE
  • splitRatio migrates to SplitAttributes.SplitType.ratio(splitRatio)


Get started

To get started with WindowManager, add the Google Maven repository to your app’s settings.gradle or project-level build.gradle file:

dependencyResolutionManagement {
    repositoriesMode.set(RepositoriesMode.FAIL_ON_PROJECT_REPOS)
    repositories {
        google()
    }
}

Then add the 1.1.0-beta01 dependency to your app’s module-level build.gradle file:

dependencies {
    implementation 'androidx.window:window:1.1.0-beta01'
    . . .
}

Happy coding!

What’s new in the Jetpack Compose March ’23 release

Posted by Jolanda Verhoef, Android Developer Relations Engineer

Today, as part of the Compose March ‘23 Bill of Materials, we’re releasing version 1.4 of Jetpack Compose, Android's modern, native UI toolkit that is used by apps such as Booking.com, Pinterest, and Airbnb. This release contains new features like Pager and Flow Layouts, and new ways to style your text, such as hyphenation and line-break behavior. It also improves the performance of modifiers and fixes a number of bugs.

Swipe through content with the new Pager composable

Compose now includes out-of-the-box support for vertical and horizontal paging between different content. Using VerticalPager or HorizontalPager enables similar functionality to the ViewPager in the view system. However, just like the benefits of using LazyRow and LazyColumn, you no longer need to create an adapter or fragments! You can simply embed a composable inside the Pager:

// Display 10 items
HorizontalPager(pageCount = 10) { page ->
    // Your specific page content, as a composable:
    Text(
        text = "Page: $page",
        modifier = Modifier.fillMaxWidth()
    )
}


These composables replace the implementation in the Accompanist library. If you already use the Accompanist implementation, check out the migration guide. See the Pager documentation for more information.

Get your content flowing with the new Flow Layouts

FlowRow and FlowColumn provide an efficient and compact way to lay out items in a container when the size of the items or the container is unknown or dynamic. These containers allow items to flow to the next row in the FlowRow or next column in the FlowColumn when they run out of space. These flow layouts also allow dynamic sizing using weights to distribute the items across the container.

Here’s an example that implements a list of filters for a real estate app:


@Composable
fun Filters() {
    val filters = listOf(
        "Washer/Dryer", "Ramp access", "Garden", "Cats OK", "Dogs OK", "Smoke-free"
    )
    FlowRow(
        horizontalArrangement = Arrangement.spacedBy(8.dp)
    ) {
        filters.forEach { title ->
            var selected by remember { mutableStateOf(false) }
            val leadingIcon: @Composable () -> Unit = {
                Icon(Icons.Default.Check, null)
            }
            FilterChip(
                selected,
                onClick = { selected = !selected },
                label = { Text(title) },
                leadingIcon = if (selected) leadingIcon else null
            )
        }
    }
}

Performance improvements in Modifiers

The major internal Modifier refactor we started in the October release has continued, with the migration of multiple foundational modifiers to the new Modifier.Node architecture. This includes graphicsLayer, lower-level focus modifiers, padding, offset, and more. This refactoring should bring performance improvements to these APIs, and you don't have to change your code to receive these benefits. Work on this continues, and we expect even more gains in future releases as we migrate Modifiers outside of the ui module. Learn more about the rationale behind the changes in the ADS talk Compose Modifiers deep dive.

Increased flexibility of Text and TextField

Along with various performance improvements, API stabilizations, and bug fixes, the compose-text 1.4 release brings support for the latest emoji version, including backwards compatibility with older Android versions 🎉🙌. Supporting this requires no changes to your application. If you’re using a custom emoji solution, make sure to check out PlatformTextStyle(emojiSupportMatch).

In addition, we’ve addressed one of the main pain points of using TextField. In some scenarios, a text field inside a scrollable Column or LazyColumn would be obscured by the on-screen keyboard after being focused. We re-worked core parts of scroll and focus logic, and added key APIs like PinnableContainer to fix this bug.

Finally, we added a lot of new customization options to Text and its TextStyle:

• Draw outlined text using TextStyle.drawStyle.
• Improve text transition and legibility during animations using TextStyle.textMotion.
• Configure line-breaking behavior using TextStyle.lineBreak. Use built-in semantic configurations like Heading, Paragraph, or Simple, or construct your own LineBreak configuration with the desired Strategy, Strictness, and WordBreak values.
• Add hyphenation support using TextStyle.hyphens.
• Define a minimum number of visible lines using the minLines parameter of the Text and TextField composables.
• Make your text move by applying the basicMarquee modifier (see the sketch after this list). As a bonus, because this is a Modifier, you can apply it to any arbitrary composable to make it move in a similar marquee-like fashion!

Marquee text using outline with shapes stamped on it using the drawStyle API.
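As a quick hedged sketch (basicMarquee is experimental in compose-foundation 1.4, so it needs the ExperimentalFoundationApi opt-in):

@OptIn(ExperimentalFoundationApi::class)
@Composable
fun MarqueeHeadline() {
    Text(
        text = "A headline that is too long to fit will scroll past automatically",
        modifier = Modifier.basicMarquee()
    )
}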

Improvements and fixes for core features

In response to developer feedback, we have shipped some particularly in-demand features and bug fixes in our core libraries:

• Test waitUntil now accepts a matcher! You can use this API to easily synchronize your test with your UI, with specific conditions that you define.
• AnimatedContent now correctly supports getting interrupted and returning to its previous state.
• Accessibility services focus order has been improved: the sequence is now more logical in common situations, such as with top/bottom bars.
• AndroidView is now reusable in LazyList if you provide an optional onReset lambda. This improvement lets you use complex non-Compose-based Views inside LazyLists.
• Color.lerp performance has been improved and now does zero allocations: since this method is called at high frequency during fade animations, this should reduce the amount of garbage collection pauses, especially on older Android versions.
• Many other minor API improvements and bug fixes as part of a general cleanup. For more information, see the release notes.

Get started!

We’re grateful for all of the bug reports and feature requests submitted to our issue tracker - they help us to improve Compose and build the APIs you need. Continue providing your feedback, and help us make Compose better!

Wondering what’s next? Check out our updated roadmap to see the features we’re currently thinking about and working on. We can’t wait to see what you build next!

Happy composing!

Compose for Wear OS 1.1 is now stable: check out new features!

Posted by Kseniia Shumelchyk, Android Developer Relations Engineer

Today we’re releasing version 1.1 of Compose for Wear OS, our modern declarative UI toolkit to help developers build beautiful, responsive apps for Wear OS.

Since the first stable release earlier this year, we have seen many developers taking advantage of the powerful tools and intuitive APIs to make building their app simpler and more efficient. Todoist and Outdooractive are some of the developers that rebuilt their Wear apps with Compose and accelerated the delivery of a new, functional user experience.

Todoist increased its growth rate by 50% since rebuilding their app for Wear OS 3, and Outdooractive reduced development time by 30% and saw a significant boost in developer productivity and better design/developer collaboration:

“Compose makes the UI code more intuitive to write and read, allowing us to prototype faster in the design phase and also collaborate better on the code. What would have taken us days now takes us hours.”

The Compose for Wear OS 1.1 release contains new features and brings improvements to existing components, focusing on UX and accessibility. We’ve already updated our samples, codelab, and Horologist libraries to work with Compose for Wear OS 1.1.


                        New features and APIs

The Compose for Wear OS 1.1 release includes the following new functionality (baseline profiles have already been added for the new components):

                        Outlined style for Chips and Buttons

                        To give you additional ability to customize the user interface, we added outlined styles for Chips and Buttons. New OutlinedChip and OutlinedButton composables provide a transparent component with a thin border that can be used for medium-emphasis actions. Also available for compact versions: OutlinedCompactChip and OutlinedCompactButton.
OutlinedChip and OutlinedButton composables
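A minimal sketch of the outlined style (the label is an arbitrary example):

OutlinedChip(
    onClick = { /* medium-emphasis action */ },
    label = { Text("Share workout") }
)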

                        Modifying Chip and Button shapes

Starting from version 1.1, you can also modify shapes for the Chip/ToggleChip and Button/ToggleButton components using new function overloads, as shown in the sketch below.
Different Chip and Button shapes
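For example, a hedged sketch that passes a custom shape to a Chip (CutCornerShape is just one illustration):

Chip(
    onClick = { /* ... */ },
    label = { Text("Custom shape") },
    shape = CutCornerShape(12.dp)  // replaces the default stadium shape
)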

                        Placeholder API

                        A new experimental API has been added to implement placeholder support. This can be used to achieve three distinct visual effects separately or all together:

                        • A placeholder background brush effect used in containers such as Chip and Cards to draw over the normal background when waiting for content to load.
• A Modifier.placeholder() to draw a stadium-shaped placeholder widget over the top of content that is being loaded.
• A Modifier.placeholderShimmer() for a gradient/shimmer effect that is drawn over the top of the other effects to indicate to users that the app is waiting for data to load.
These effects are designed to be coordinated, shimmering and wiping off in an orchestrated fashion.
Placeholder API usage examples

                        Check out the reference docs and sample in Horologist to see how to apply the placeholder to common use cases, such as a Chip with icon and a label that puts placeholder over individual content slots and draws a placeholder shimmer on top while waiting for data to load.
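As a rough sketch of how the pieces fit together (the composable's shape, the nullable name parameter, and the opt-in are illustrative assumptions):

@OptIn(ExperimentalWearMaterialApi::class)
@Composable
fun ContactChip(name: String?) {
    // Flips to "content ready" once the data is non-null.
    val placeholderState = rememberPlaceholderState { name != null }
    Chip(
        onClick = { },
        label = {
            Text(
                text = name.orEmpty(),
                // Stadium-shaped placeholder over the label slot.
                modifier = Modifier.fillMaxWidth().placeholder(placeholderState)
            )
        },
        // Shimmer drawn across the whole chip while waiting for data.
        modifier = Modifier.fillMaxWidth().placeholderShimmer(placeholderState)
    )
    if (!placeholderState.isShowContent) {
        LaunchedEffect(placeholderState) {
            placeholderState.startPlaceholderAnimation()
        }
    }
}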

                        Modifier.scrollAway

Horologist’s fadeAway modifier has graduated to the scrollAway modifier in version 1.1. Modifier.scrollAway scrolls an item vertically in and out of view, based on the scroll state, and has overloads to work with Column, LazyColumn, and ScalingLazyColumn.

                        Use this modifier to make TimeText fade out of the view as the user starts to scroll a list of items upwards.
ScrollAway modifier usage with TimeText
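A minimal sketch with ScalingLazyColumn (list content elided):

val listState = rememberScalingLazyListState()
Scaffold(
    timeText = {
        // TimeText scrolls out of view as the list scrolls up.
        TimeText(modifier = Modifier.scrollAway(listState))
    }
) {
    ScalingLazyColumn(state = listState) {
        // items(...) elided
    }
}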

                        Additional parameters in CurvedTextStyle

                        CurvedTextStyle now supports additional parameters (fontFamily, fontWeight, fontStyle, fontSynthesis) to specify font details when creating a curved text style. Extended curved text style can be used on both curvedText and basicCurvedText.

Applying different font to curved text
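For instance, a hedged sketch of curved text with the new font parameters (the text and font choices are arbitrary):

CurvedLayout {
    curvedText(
        text = "Hello, wrist!",
        style = CurvedTextStyle(
            fontFamily = FontFamily.Serif,
            fontWeight = FontWeight.Bold
        )
    )
}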

                        UX and accessibility improvements

                        The 1.1 release also focuses on bringing a refined user experience, improvements for TalkBack support and overall better accessibility:

                        • ToggleChip and SplitToggleChip support usage of animated toggle controls (Checkbox, Switch and RadioButton) that can be used instead of the static icons provided by ToggleChipDefaults.
                        • Default gradient colors for Chip/ToggleChip and Cards were adjusted to match the latest UX specification.
                        • Updated a number of the default colors in the MaterialTheme to improve accessibility as the original colors did not have sufficient contrast.
                        • Accessibility improvements to Picker so that multi-picker screens are navigable with screen readers and the content description is accessible.
                        • InlineSlider and Stepper now have button roles, so that TalkBack can recognize them as buttons.
                        • The PositionIndicator in Scaffold is now positioned and sized so that it only takes the space needed. This is useful when semantic information is added to it, so TalkBack gets the correct bounds of the PositionIndicator on screen.

                        It’s time ⌚ to bring your app to the wrist!

                        Get started

To begin developing with Compose for Wear OS, get hands-on experience with our codelab, and make sure to check out the documentation and samples. Visit the Compose for Wear OS release notes for the full list of changes available in version 1.1.

Note that using version 1.1 of Compose for Wear OS requires version 1.3 of the androidx.compose libraries, and therefore Kotlin 1.7.10. Check out the Compose to Kotlin Compatibility Map for more information.

                        Provide feedback

Compose for Wear OS continues to evolve with the features you’ve been asking for. Please continue providing feedback on the issue tracker, and join the #compose-wear channel on Kotlin Slack to connect with the Google team and dev community.

                        We’re excited to see a growing number of apps using Compose for Wear OS in production, and we’re grateful for all issues and requests that help us to make the toolkit better!

                        Start building for Wear OS now

                        Discover even more with technical sessions from the Android Dev Summit providing guidance on app architecture, testing, handling rotary input, and verticalized sessions for media and fitness.

                        Power your Wear OS fitness app with the latest version of Health Services

Posted by Breana Tate, Developer Relations Engineer

The Health Services API enables developers to use on-device sensor data and related algorithms to provide their apps with high-quality data related to activity, exercise, and health. What’s more, you don’t have to choose between conserving battery life and delivering high-frequency data: Health Services makes it possible to do both. Since announcing Health Services Alpha at I/O ‘21, we’ve introduced a number of improvements to the platform aimed at simplifying the development experience. Read on to learn about the exciting features from Health Services Beta in Android Jetpack that your app will be able to take advantage of when you migrate from Alpha.


                        Capture more with new metrics

                        The Health Services Jetpack Beta introduces new data and exercise types, including DataType.GOLF_SHOT_COUNT, ExerciseType.HORSE_RIDING, and ExerciseType.BACKPACKING. You can review the full list of new exercise and data types here. These supplement the already large library of data and exercise types available to developers building Wear OS apps with Health Services. Additionally, we’ve added the ability to listen for health events, such as fall detection, through PassiveMonitoringClient.

                        In addition to new data types, we’ve also introduced a new organization model for data in Health Services. This new model makes the Health Services API more type-safe by adding additional classification information to data types and data points, reducing the chance of errors in code. In Beta, all DataPoint types have their own subclass and are derived from the DataPoint class. You can choose from:

                        • SampleDataPoints 
                        • IntervalDataPoints 
                        • StatisticalDataPoints
                        • CumulativeDataPoints

                        DataTypes are categorized as AggregateDataTypes or DeltaDataTypes.

                        As a result of this change, Health Services can guarantee the correct type at compile time instead of at runtime, reducing errors and improving the developer experience. For example, location data points are now represented as a strongly-typed LocationData object instead of as a DoubleArray. Take a look at the example below:

                        Previously:

                        exerciseUpdate.latestMetrics[DataType.LOCATION]?.forEach {
                          val loc = it.value.asDoubleArray()

                          val lat = loc[DataPoints.LOCATION_DATA_POINT_LATITUDE_INDEX]
                          val lon = loc[DataPoints.LOCATION_DATA_POINT_LONGITUDE_INDEX]
                          val alt = loc[DataPoints.LOCATION_DATA_POINT_ALTITUDE_INDEX]

                          println("($lat,$lon,$alt) @ ${it.startDurationFromBoot}")
                        }

                        Health Services Beta:

                        exerciseUpdate.latestMetrics.getData(DataType.LOCATION).forEach {
                          // it.value is of type LocationData
                          val loc = it.value
                          val time = it.timeDurationFromBoot
                          println("loc = [${loc.latitude}, ${loc.longitude}, ${loc.altitude}] @ $time")

                        }

                        As you can see, due to the new approach, Health Services knows that loc is of type List<SampleDataPoint<LocationData>> because DataType.LOCATION is defined as a DeltaDataType<LocationData, SampleDataPoint<LocationData>>.


                        Consolidated exercise end state

                        ExerciseState is now included within ExerciseUpdate’s ExerciseStateInfo property. To give you more control over how your app responds to an ending exercise, we’ve added new ExerciseStates called ExerciseState.ENDED and ExerciseState.ENDING to replace what was previously multiple variations of ended and ending states. These new states also include an endReason, such as USER_END, AUTO_END_PREPARE_EXPIRED, and AUTO_END_PERMISSION_LOST.

                        The following example shows how to check for exercise termination:

                        val callback = object : ExerciseUpdateCallback {
                            override fun onExerciseUpdateReceived(update: ExerciseUpdate) {
                                if (update.exerciseStateInfo.state.isEnded) {
                                    // Workout has either been ended by the user, or otherwise terminated
                                    val reason = update.exerciseStateInfo.endReason
                                }
                                ...
                            }
                            ...
                        }


                        Improvements to passive monitoring

Health Services Beta also transitions to a new set of passive listener APIs. These changes largely focus on making daily metrics better typed and easier to integrate. For example, we renamed the PassiveListenerConfig function setPassiveGoals to setDailyGoals, reinforcing that Health Services only supports daily passive goals. We’ve also condensed multiple APIs for registering passive listeners into a single registration call, so you can implement overrides for only the data your app needs.

                        Additionally, the Passive Listener BroadcastReceiver was replaced by the PassiveListenerService, which offers stronger typing, along with better reliability and performance. Clients can now register both a service and a callback simultaneously with different requests, making it easier to register a callback for UI updates while reserving the background request for database updates.
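A minimal sketch of such a service (the service name and the choice of daily steps are illustrative; registration happens separately via PassiveMonitoringClient.setPassiveListenerServiceAsync with a matching PassiveListenerConfig):

class PassiveDataService : PassiveListenerService() {
    override fun onNewDataPointsReceived(dataPoints: DataPointContainer) {
        // Pull just the daily step samples out of the container.
        val steps = dataPoints.getData(DataType.STEPS_DAILY)
        // Persist to your database here.
    }
}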


                        Build for even more devices on Wear OS 3

                        Health Services is only available for Wear OS 3. The Wear OS 3 ecosystem now includes even more devices, which means your apps can reach even more users. Montblanc, Samsung, and Fossil are just a few of the OEMs that have recently released new devices running Wear OS 3 (with more coming later this year!). The newly released Pixel Watch also features Fitbit health tracking powered by Health Services.

                        If you haven’t used Health Services before, now is the time to try it out! And if your app is still using Health Services Alpha, here is why you should consider migrating:

• Ongoing Health Services development: since Health Services Beta is the newest version, bug fixes and feature improvements are likely to be prioritized over older versions.
• Prepares your app infrastructure for when Health Services reaches its stable release.
• Improves type safety, meaning less chance of errors in code!
• Adds functionality that makes it easier to work with Health Services data.

                        You can view the full list of changes and updated documentation at developer.android.com.


                        Android Dev Summit ‘22: What’s new in Jetpack

                        Posted by Amanda Alexander, Product Manager

                        Android Jetpack is a key component of Modern Android Development. It is a suite of over 100 libraries, tools and guidance to help developers follow best practices, reduce boilerplate code, and write code that works consistently across Android versions and devices. By using Android Jetpack to improve your productivity, you can focus on building unique features for your app. 

Most apps on Google Play use Jetpack as a key component of their app architecture; in fact, over 90% of the top 1,000 apps use Android Jetpack.

                        At ADS this week we released updates to three major areas of Jetpack:

                        1. Architecture Libraries and Guidance
                        2. Application Performance
                        3. User Interface Libraries and Guidance

                        We’ll take a moment to expand on each of these areas and then conclude with some additional updates that we also shipped.

                        Let’s go…


                        Architecture Libraries and Guidance

                        App architecture libraries and components ensure that apps are robust, testable, and maintainable.

                        Managing tasks with WorkManager

The WorkManager library makes it easy to schedule deferrable, asynchronous tasks that must run reliably, such as uploading backups or analytics. These APIs let you create a task and hand it off to WorkManager to run when the work constraints are met.

WorkManager 2.8.0-alpha04 adds the ability to update WorkRequests in a non-intrusive way, preserving the original enqueue time, chaining, and more. This makes changing a Worker’s constraints much easier, for example when constraints need to change from one version of an application to another, or via server-side configuration. Previously, this was possible only by canceling already scheduled workers and rescheduling them, which was very disruptive: already running workers could be canceled, the cadence of periodic workers could be broken, and whole chains of workers had to be reconstructed if one of them needed an update. Now, using the update method or ExistingPeriodicWorkPolicy.UPDATE, you don’t have to worry about any of these issues.
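A hedged sketch of the policy-based approach (SyncWorker and the "sync" unique-work name are hypothetical):

val request = PeriodicWorkRequestBuilder<SyncWorker>(12, TimeUnit.HOURS)
    .setConstraints(
        Constraints.Builder()
            .setRequiredNetworkType(NetworkType.UNMETERED)
            .build()
    )
    .build()

// UPDATE applies the new constraints while preserving the original
// enqueue time and periodic cadence.
WorkManager.getInstance(context)
    .enqueueUniquePeriodicWork("sync", ExistingPeriodicWorkPolicy.UPDATE, request)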


                        Data Persistence

Most applications need to persist local state, whether it be caching results, managing local lists of user-entered data, or powering data returned in the UI. Room is the recommended data persistence layer, providing an abstraction layer over SQLite that offers increased usability and safety over the platform APIs.

In Room 2.5.0-alpha03, we added a new shortcut annotation, @Upsert, which attempts to insert an entity when there is no uniqueness conflict, or updates the entity if there is a conflict. Moreover, all of Room’s runtime APIs, along with androidx.sqlite, have been converted to Kotlin, creating a better experience for Kotlin users, such as strict nullability, and opening the door to supporting other Kotlin language features.
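A minimal sketch of the new annotation (the Song entity and DAO are hypothetical):

@Entity
data class Song(
    @PrimaryKey val id: Long,
    val title: String
)

@Dao
interface SongDao {
    // Inserts when the id is new; updates the existing row on conflict.
    @Upsert
    suspend fun upsertSong(song: Song)
}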


                        Android 13 Activity APIs, now backward compatible

The Activity library includes the ComponentActivity class, a base class built on top of the Android framework’s Activity class. It provides APIs that enable Jetpack Compose and other Architecture Components, and, with Activity 1.6.1, support for backporting new features introduced in Android 13.

By using ComponentActivity directly, or either of its subclasses, FragmentActivity or AppCompatActivity, you can use a single API to pick images via the Photo Picker when it is available, with an automatic fallback to the Storage Access Framework to support devices back to Android 4.4 (API 19).
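A rough sketch of that single API (the activity and the callback body are illustrative):

class GalleryActivity : ComponentActivity() {
    // Uses the Photo Picker when available, falling back to the
    // Storage Access Framework on older devices.
    private val pickMedia =
        registerForActivityResult(ActivityResultContracts.PickVisualMedia()) { uri ->
            if (uri != null) { /* display the selected image */ }
        }

    private fun onPickImageClicked() {
        pickMedia.launch(
            PickVisualMediaRequest(ActivityResultContracts.PickVisualMedia.ImageOnly)
        )
    }
}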

                        You’ll also be set up for the future with support for the Predictive back gesture introduced in Android 13 simply by upgrading to Activity 1.6.1. The Activity APIs provide a single API for custom back navigation that works back to API 14 and is fully compatible with opting in to predictive back gestures.
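A minimal sketch of that back-navigation API (the callback body is an assumption):

// In an Activity's onCreate: addCallback is lifecycle-aware and fully
// compatible with opting in to predictive back gestures.
onBackPressedDispatcher.addCallback(this) {
    // Handle back here, e.g. dismiss an in-app overlay.
}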


                        Testing Pagination with the Paging Testing library

The Paging library provides support for loading very large data sets. To get the most out of Paging, the integration spans multiple layers of your application: your repository layer, ViewModel layer, and UI.

                        Paged data flows from the PagingSource or RemoteMediator components in the repository layer to the Pager component in the ViewModel layer. Then the Pager component exposes a Flow of PagingData to the PagingDataAdapter in the UI layer.
To make it easier to test that integration, Paging 3.2.0-alpha03 introduces a new paging-testing artifact with test-specific APIs that make it possible to test each layer in isolation. This first release focuses on the repository layer, specifically on testing a custom PagingSource via the new TestPager APIs, so you can exercise your paging sources in scenarios that are harder to reproduce with integration tests.
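A hedged sketch of a TestPager unit test (MyPagingSource is a hypothetical PagingSource under test):

@Test
fun refreshLoadsFirstPage() = runTest {
    val pagingSource = MyPagingSource()
    val pager = TestPager(PagingConfig(pageSize = 20), pagingSource)

    // Drive an initial refresh without a Pager, a Flow, or UI.
    val result = pager.refresh() as PagingSource.LoadResult.Page
    assertEquals(20, result.data.size)
}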


                        New Architecture Documentation

                        Investing in Architecture is important to improve the quality of your app by making it more robust, testable, maintainable, and scalable. That's why our recommendations on Architecture keep growing! In fact, they've grown so much we released a new Architecture recommendations page that consolidates and centralizes important best practices you can find in our docs.

The team recently released new guidance on modularization. The guide is split into two parts: an overview page that introduces the benefits of modularization and its high-level concepts, and a common modularization patterns page that explores patterns you can apply to your own codebase.

                        The UI layer docs got two new pages:

• The state holders and UI state page explains the different types of state holders you can find in the UI layer, and which implementation to use depending on the type of logic you need to perform.
• The state production page shows best practices for modeling and exposing UI state depending on the sources of state change.

                        Due to popular demand, the UI events page has been updated with examples of Navigation UI events. We also released new Navigation guidance about providing runtime type safety to the Kotlin DSL and Navigation Compose.

Lastly, if you need to make your app work offline, we’ve got you covered. The build an offline-first app guide helps you design your app to properly handle reads and writes, and to deal with sync and conflict resolution on a device with no Internet connectivity.


                        New ViewModel Documentation

                        This updated guidance is designed to make it easier to understand when ViewModels are the right tool to reach for when building your UI layer.


                        Application Performance

                        Using performance libraries allows you to build performant apps and identify optimizations to maintain high performance, resulting in better end-user experiences. 


                        Improving Start-up Times

                        App speed can have a big impact on a user’s experience, particularly when using apps right after installation. To improve that first time experience, we are continuing to enhance Baseline Profiles. Baseline Profiles allow apps and libraries to provide the Android run-time with metadata about code path usage, which it uses to prioritize ahead-of-time compilation. This profile data is aggregated across libraries and lands in an app’s APK as a baseline.prof file, which is then used at install time to partially pre-compile the app and its statically-linked library code. This can make your apps load faster and reduce dropped frames the first time a user interacts with an app. 

                        With AGP 7.3, baseline profile tooling is fully stable, so you don't need alpha dependencies to get a 30%+ performance boost to your app's initial launch and scroll after each app update. 

                        In profileinstaller:1.3.0-alpha01, ProfileVerifier allows you to inspect profile compilation in the field, and starting in Android Studio Flamingo Canary 6, the Studio APK Inspector now shows the contents of your APK's baseline profiles.


                        Accurate reporting of startup metrics

Startup metrics are an important part of measuring your app’s performance, but the system (and the Benchmark libraries!) need a signal that marks the completion of the startup phase. That signal is the Activity’s call to reportFullyDrawn(). Activity 1.7.0-alpha01 added new APIs in the form of the FullyDrawnReporter APIs, which allow multiple components to report when they are ready for interaction. ComponentActivity will wait for all components to complete before calling reportFullyDrawn() on your behalf.

Using these APIs is encouraged because they enable:

                        • Signaling the Android Runtime when startup completes, to ensure all of the code run during a multi-frame startup sequence is included and prioritized for background compilation.
                        • Signaling Macrobenchmark and Play Vitals when your application should be considered fully drawn for startup metrics, so you can track performance.

                        Two Activity Compose APIs, ReportDrawnWhen and ReportDrawnAfter, have been added to make it more convenient to use the FullyDrawnReporter from individual composables.
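A minimal sketch of the Compose API (the list-based readiness condition is an example):

@Composable
fun FeedScreen(items: List<String>) {
    // Defers reportFullyDrawn() until there is content on screen.
    ReportDrawnWhen { items.isNotEmpty() }
    LazyColumn {
        items(items) { item -> Text(item) }
    }
}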


                        Recomposition Tracing

                        We recently launched the first alpha of Jetpack Compose Composition Tracing, a tool that allows you to see composable functions in the Android Studio system trace profiler. This feature combines the benefits of low intrusiveness from system tracing with method tracing levels of detail in your compositions. By adding a dependency on Compose Runtime Tracing, you will be able to see traces of your recomposition call stack in Android Studio Flamingo Canary 5 system traces and click on them to navigate straight to the code! You can read more about the feature and how to set it up in your project here.
Composables in the system trace

                        User Interface Libraries and Guidance


                        Jetpack Compose

                        Jetpack Compose, Android’s modern toolkit for building native UI, has launched the Compose October ‘22 release which includes many performance improvements and adds support for staggered grids, drawing text directly to canvas, and pull to refresh. We also published our first Bill of Materials (BOM) to simplify the process of adding Compose library versions to your Gradle dependencies. Check out the What’s New in Jetpack Compose blog post to learn more.
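For example, a minimal Gradle (Kotlin DSL) sketch using the first BOM version (the individual artifacts shown are just two options):

dependencies {
    // The BOM pins the versions of the Compose libraries below.
    implementation(platform("androidx.compose:compose-bom:2022.10.00"))
    implementation("androidx.compose.material3:material3")
    implementation("androidx.compose.ui:ui-tooling-preview")
}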


                        Wear Tiles Material Library

                        Tiles for Wear OS give users glanceable access to information and actions. To help you create tiles, we launched the Tiles Material library, which includes built-in support for Material Design for Wear OS.

                        The included components are:

                        • Button - a clickable, circular-shaped object, with either icon, text or image with 3 predefined sizes.
                        • Chip - a clickable, stadium-shaped object that can contain an icon, primary and secondary labels, and has fixed height and customizable width.
• CompactChip and TitleChip - two variations of the standard Chip that have smaller and larger heights, respectively, and can contain one line of text.
                        • CircularProgressIndicator - a colored arc around the edge of the screen with the given start and end angles, which can describe a full or partial circle with the full progress arc behind it.
                        • Text - a text element which uses the recommended Wear Material typography styles.

Common tile components: Button, Chip, TitleChip, CompactChip, CircularProgressIndicator, and Text with the recommended typography.

                        In addition to components, there are several recommended tile layouts within Material guidelines. Read more about Wear OS Tiles Material Library in this blog.


                        Add Splash Screen to more devices

                        The core SplashScreen library brings the new Android 12 splash screen to all devices from API 23. Using the splash screen library, your application doesn't need any custom SplashScreen Activity and leverages the right APIs for a fast launch of your application. To use it, simply follow the steps outlined in our guide. For more information about the Android 12 splash screen, visit the official documentation.
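A minimal sketch (this assumes the activity’s theme has already been set up with a Theme.SplashScreen parent, as described in the guide):

class MainActivity : ComponentActivity() {
    override fun onCreate(savedInstanceState: Bundle?) {
        // Must be called before super.onCreate().
        installSplashScreen()
        super.onCreate(savedInstanceState)
        setContent { /* app UI */ }
    }
}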


                        Other key updates


                        Camera 

The CameraX library makes it easier to add camera capabilities to your app. In 1.2.0-beta01, a new library, camera-mlkit-vision, was added. It enables easy integration of CameraX with many ML Kit features, including barcode scanning, face detection, and text detection. You can find the sample code here. We also added a new experimental Zero-Shutter Lag API, which optimizes the capture pipeline for better latency while keeping good image quality.


                        Annotation

                        The Annotation library exposes metadata that helps tools and other developers understand your app's code. It provides familiar annotations like @NonNull that pair with lint checks to improve the correctness and usability of your code.

The Annotation 1.5 stable release has been fully migrated to Kotlin sources, resulting in support for Kotlin-specific target use sites and other Kotlin-compatible annotation features.


                        Kotlin Multiplatform

We have been experimenting with Kotlin Multiplatform Mobile from JetBrains to enable code sharing across platforms. We have experimental previews of the Collections and DataStore libraries for apps targeting Android and iOS, and we would like your feedback! Read more here.



This was a brief tour of all the key changes in Jetpack over the past few months. For more details on each Jetpack library, check out the AndroidX release notes, quickly find relevant libraries with the API picker, and watch the Google ADS talks for additional highlights.