
Kakao Games increased FPS stability to 96% through Android Adaptability

Posted by Dohyun Kim, Developer Relations Engineer, Android Games

Finding the balance between graphics quality and performance

Ares: Rise of Guardians is a mobile-to-PC sci-fi MMORPG developed by Second Dive, a game studio based in Korea known for its expertise in developing action RPG series, and published by Kakao Games. Set in a vast universe with a detailed, futuristic background, Ares is full of exciting gameplay and beautifully rendered characters involving combatants wearing battle suits. However, because of these richly detailed graphics, some users’ devices struggled to handle the gameplay without performance degradation.

For some users, their device would overheat after just a few minutes of gameplay and enter a thermally throttled state. In this state, the CPU and GPU frequency are reduced, affecting the game’s performance and causing the FPS to drop. However, as soon as the decreased FPS improved the thermal situation, the FPS would increase again and the cycle would repeat. This FPS fluctuation would cause the game to feel janky.

Adjust the performance in real time with Android Adaptability

To solve this problem, Kakao Games used Android Adaptability and Unity Adaptive Performance to improve the performance and thermal management of their game.

Android Adaptability is a set of tools and libraries for understanding and responding to changing performance, thermal, and user situations in real time. These include the Android Dynamic Performance Framework’s thermal APIs, which provide information about the thermal state of a device, and the PerformanceHint API, which helps Android choose the optimal CPU operating point and core placement. Both APIs work with the Unity Adaptive Performance package to help developers optimize their games.
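Outside of Unity, the same thermal signals are exposed directly by the platform. Below is a minimal Kotlin sketch of reading them via PowerManager; the applyQuality callback is a hypothetical stand-in for your game’s own fidelity logic:

import android.content.Context
import android.os.Build
import android.os.PowerManager

// Minimal sketch: reading ADPF thermal signals directly from PowerManager.
// applyQuality() is a hypothetical hook standing in for your game's fidelity logic.
fun observeThermals(context: Context, applyQuality: (lowered: Boolean) -> Unit) {
    val pm = context.getSystemService(Context.POWER_SERVICE) as PowerManager

    if (Build.VERSION.SDK_INT >= Build.VERSION_CODES.Q) {
        // Invoked whenever the device's thermal status changes.
        pm.addThermalStatusListener { status ->
            applyQuality(status >= PowerManager.THERMAL_STATUS_MODERATE)
        }
    }

    if (Build.VERSION.SDK_INT >= Build.VERSION_CODES.R) {
        // Forecast how close the device is to throttling over the next 10 seconds:
        // values approach 1.0 as throttling becomes imminent.
        val headroom: Float = pm.getThermalHeadroom(10)
        applyQuality(headroom >= 0.8f)
    }
}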

Android Adaptability and Unity Adaptive Performance work together to adjust the graphics settings of your app or game to match the capabilities of the user’s device. As a result, it can improve performance, reduce thermal throttling and power consumption, and preserve battery life.

Moving image of gameplay from Ares: Rise of Guardians

Results

After integrating adaptive performance, Ares was better able to manage its thermal situation, which resulted in less throttling. As a result, users were able to enjoy a higher frame rate, and FPS stability increased from 75% to 96%.

In the charts below, the blue line indicates the thermal warning level. The bottom line (0.7) indicates no warning, the midline (0.8) is throttling imminent, and the upper line (0.9) is throttling. As you can see in the first chart, before implementing Android Adaptability, throttling happened after about 16 minutes of gameplay. In the second chart, you can see that after integration, throttling didn’t occur until around 22 minutes.

Graph showing high graphic quality setting measuring thermal headroom against thermal warning level in frames-per-second

Graph showing enabled android adaptability measuring thermal headroom against thermal warning level in frames-per-second

Kakao Games also wanted to reduce device heating, which they knew wasn’t possible with a continuously high graphic quality setting. The best practice is to gradually lower the graphical fidelity as device temperature increases to maintain a constant framerate and thermal equilibrium. So Kakao Games created a six-step change sequence with Android Adaptability, offering stable FPS and lower device temperatures. Automatic changes in fidelity are reflected in the in-game graphic quality settings (resolution, texture, shadow, effect, etc.) in the settings menu. Because some users want the highest graphic quality even if their device can’t sustain performance at that level, Kakao Games gave them the option to manually disable Unity Adaptive Performance.
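For illustration, here is a hedged Kotlin sketch of what a step-wise fidelity ladder like this could look like. The specific values are invented, not Kakao Games’ actual settings:

data class QualityStep(val resolutionScale: Float, val shadows: Boolean, val effects: Boolean)

// Hypothetical six-step ladder from full fidelity (step 0) to lowest fidelity (step 5).
val qualitySteps = listOf(
    QualityStep(1.00f, shadows = true,  effects = true),
    QualityStep(0.90f, shadows = true,  effects = true),
    QualityStep(0.80f, shadows = true,  effects = false),
    QualityStep(0.70f, shadows = false, effects = false),
    QualityStep(0.60f, shadows = false, effects = false),
    QualityStep(0.50f, shadows = false, effects = false),
)

// Map thermal headroom (0.0 = cool, 1.0 = throttling) to a step,
// lowering fidelity as the device heats up.
fun stepFor(headroom: Float): QualityStep {
    val index = (headroom.coerceIn(0f, 1f) * (qualitySteps.size - 1)).toInt()
    return qualitySteps[index]
}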

Get started with Android Adaptability

Android Adaptability and Unity Adaptive Performance are now available to all Android game developers using the Android provider, on most Android devices running API level 30 and above (thermal) and 31 and above (PerformanceHint API). The Android provider is available starting with Adaptive Performance version 5.0.0. The thermal APIs are integrated with Adaptive Performance so developers can easily retrieve device thermal information, and the PerformanceHint API is called automatically on every Update() without any additional work.

Learn how Android Adaptability and Unity Adaptive Performance can help you stabilize your game’s FPS and reduce thermal throttling.

Celebrating 25 years of Google Search: developer trends and history

Posted by Google for Developers

This month, Google Search turns 25. A lot has changed over the last quarter of a century when it comes to the development space, but one thing has remained a constant - whether you’re stuck on a problem, reading documentation, learning about new technology, or figuring out the best tech stack for your project, Search has been a helpful tool in getting your questions answered.

What you searched for is a strong signal when it comes to developer trends across web, mobile, cloud, and AI over the years. Let’s take a look at some of the interesting things you’ve looked up* – and some funny queries too – because everyone loves a good retrospective.

*Note: Google Trends data goes as far back as 2004.


Building a better web

After the dot-com bubble burst in 2000–2001, the web continued to advance and internet use exploded. Web development responded by enabling designers to incorporate multimedia into web pages. Cascading Style Sheets (CSS, released in 1997) and Flash video (1996–2017) changed the way web pages looked and moved, and streaming changed the way people consumed video. However, the basic interface and structure of the web page remained the same. As a variety of browsers came to market, JavaScript frameworks and libraries rose alongside them, since JavaScript runs everywhere that CSS and HTML do. All these shifts led to some fun searches.

How to center a div

You can’t think of web development without CSS. And it turns out, “how to center a div” has been searched for from the beginning - it’s also provided the internet with a wealth of memes over the years.

JavaScript libraries

JavaScript is a front-end programming language that is used to add interactivity and dynamic behavior to web pages. It is one of the most popular programming languages in the world, and it is essential for building modern web applications. But at some point, most developers have to ask themselves what kind of JavaScript they should use. Vanilla? A framework? A library?

Starting in 2007 there was an uptick of searches for jQuery, which peaked in 2013 and started to fall after that. Meanwhile, developers started to show more interest in React and Angular right around the same time as jQuery’s peak. By April of 2018 they all had a similar volume of searches, and soon after React took over, followed by Angular. Nigeria searched for React the most, while Japan preferred jQuery, and Ecuador preferred Angular. Nowadays, the choice of JavaScript framework is the subject of a lot of controversy - what's your favorite? Share your thoughts with us.

Search term volume for “React”, “jQuery”, and “Angular” from 2004 to present day


The rise of mobile

As the web improved, so did mobile. Phones went from cellular to smart. The app economy blossomed. Due to infrastructure and financial constraints, many emerging markets in Asia, Africa, and Latin America skipped the desktop era in favor of mobile to get their information and entertainment. Mobile development, Android in particular, kicked into high gear in response.

Android development

Starting in 2007, Android was released as a developer platform before devices were on the market, along with the first Android Developer Challenge, which launched to support and recognize developers who built great applications. In 2008, the Android OS was released and open sourced, along with T-Mobile’s G1 as the first smartphone to run Android. That same year, the Android Market was released, giving developers an easy way to distribute apps to the Android community. In 2012, the marketplace was rebranded as Google Play. All of this momentum helped add to the frenzy, but searches really took off starting in 2012.

Search term volume for “Android development” from 2007 to 2012

Mobilegeddon

Even web developers couldn’t escape the importance of mobile in its heyday. By 2010, “mobile-first” and “responsive design” became best practices for the web in order to support mobile traffic. As a response to the clear indication that mobile wasn’t going anywhere, by 2015, Google’s search ranking algorithm changed to favor content that is mobile-friendly. Dubbed ‘Mobilegeddon’ by Chuck Price in a post written on Search Engine Watch, the change sent developers searching for the term and adjusting their best practices, such as responsive and mobile-first design. By 2017, mobile accounted for approximately half of web traffic worldwide, before permanently surpassing desktop traffic in 2020.


Moving to the cloud

Over the last 25 years, cloud development has evolved from a niche technology to a mainstream solution for organizations of all sizes. Being free from managing infrastructure and operations provides a number of advantages like cost savings, speed, and scalability. In the early days, it was mainly used for hosting static websites and applications. But as technology matured, it became increasingly popular for a wider range of applications, including IoT, big data, real-time data, and ML in addition to more modern development practices like containers, microservices, and security.

Cloud computing

As development continued to modernize, developers, IT, and operations figured out fairly quickly that managing infrastructure and servers was painful and expensive. In response, many cloud environment providers launched between 2002-2010, including Google Cloud Platform.

Search term volume for “cloud computing” from 2004 to 2012

Cloud databases

Cloud services extend to storage, databases, and so much more – a necessity as technology becomes more robust, supporting large amounts of data in real time from IoT devices or use cases like ML and large language models. While there were searches for the term “cloud database” as far back as 2004, it spiked in 2017, coinciding with Google Cloud’s Cloud Spanner. And with the latest renaissance of AI technology, it’s pretty likely that this search term will keep going up in the coming months and years.


Present day innovations

Disruptive technologies like artificial intelligence and machine learning are infused in development today. From AI-assisted coding to solving problems with big data, AI is permeating our lives. So it’s no wonder developers are searching for some key terms.

Artificial intelligence, machine learning, and more

While some applications of AI, ML, deep learning, and large language models (LLMs) are new, most of the terms aren’t. Even in 2004, AI and ML were search terms of interest. In 2015, most of these terms started to pick back up, and they continue to trend upward, with a sharp spike in interest in 2022. That same year, ‘generative AI’ was formally introduced to the world. Python is the coding language most closely associated with AI in search, becoming the most searched language in 2019, finally surpassing Java.

Search term volume for “artificial intelligence”, “machine learning”, “deep learning”, and “generative AI” from 2004 to present day

Looking ahead

While some aspects of development have gotten progressively cleaner, more modern, and more lightweight - there’s now more choice and complexity when it comes to your tech stack. So it’s no wonder “why is my code not working” spiked in both the early days and today. At Google, we’ll do our best to help streamline and simplify technology to help you build smarter and ship faster with new technology like Project IDX, Android Studio Bot, and coding for Bard.

Search term volume for “why is my code not working?” from 2004 to present day

It’s inspiring to see what you have done with the answers to your questions, whether you’re trying to solve specific problems, learning new skills or best practices, figuring out what technology you want to use, or dreaming up your next big idea. We look forward to seeing what the next 25 years bring.

Follow more developer trends and insights on Google for Developers across YouTube, LinkedIn, and Instagram.

Latest ARTwork on hundreds of millions of devices

Posted by Serban Constantinescu, Product Manager

Wouldn’t it be great if each update improved start-up times, execution speed, and memory usage of your apps? Google Play system updates for the Android Runtime (ART) do just that. These updates deliver performance improvements, the latest security fixes, and unify the core OpenJDK APIs across hundreds of millions of devices, including all Android 12+ devices and soon Android Go.

ART is the engine behind the Android operating system (OS). It provides the runtime and core APIs that all apps and most OS services rely on. Both Java and Kotlin are compiled down to bytecode executed by ART. Improvements in the runtime, compiler, and core APIs benefit all developers, making app execution faster and bytecode compilation more efficient.

While parts of Android are customizable by device manufacturers, ART is the same for all devices, and Google Play system updates provide a path for modular updates.

Modularizing the OS

Android was originally designed for monolithic updates, which meant that OS components did not need to have clear API boundaries. This is because all dependent software would be built together. However, this made it difficult to update ART independently of the rest of the OS. Our first challenge was to untangle ART's dependencies and create clear, well-defined, and tested API boundaries. This allowed us to modularize ART and make it independently updatable.

Illustration of a racecar with an engine part hovering above the hood. A curved arrow points to where this part should go

As a core part of the OS, ART had to blaze new trails and engineer new OS boundaries. These new boundaries were so extensive that manually adding and updating them would be too time-consuming. Therefore, we implemented automatic generation of these boundaries through introspection in the build system.

Another example is stack unwinding, which reports the functions last executed when an issue is detected. Before modularizing the OS, all stack unwinding code was built together and could change across Android versions. This made the transition even more challenging: since a single version of ART is delivered to many versions of Android, we had to create a new API boundary and design it to be forward-compatible with newer versions of the ART APEX module on devices that are no longer getting full OS updates.

Recently, for Android 14, we refactored the interface between the Package Manager, the service that determines how to install and update apps, and ART. This moves the OS boundary from the ART dex2oat command line to a well-defined interface that enables future optimizations, such as finer-grained control over the compilation mode.

ART updatability also introduced new challenges. For example, the collection of Java libraries, referred to as the Boot Classpath, had to be securely recompiled to ensure good performance. This required introducing a new secure state for compilation during boot as well as a fallback JIT compilation mode.

On older devices, the secure compilation happens on the first reboot after an ART update. On newer devices that support the Android Virtualization Framework, the compilation happens while the device is idle, in an enclave called Isolated Compilation – saving up to 20 seconds of boot-time.

Testing the ART APEX module

The ART APEX module is a complex piece of software with an order of magnitude more APIs than any other APEX module. It also backs a quarter of the developer APIs available in the Android SDK. In addition, ART has a compiler that aims to make the most of the underlying hardware by generating chipset-specific instructions, such as Arm SVE. This, together with the multiple OS versions on which the ART APEX module has to run, makes testing challenging.

We first modularized the testing framework from per-platform release (e.g., Android CTS) to per-module. We did this by introducing an ART-specific Mainline Test Suite (MTS), which tests both the compiler and runtime, as well as core OpenJDK APIs, while collecting code coverage statistics.

Our target is 100% API coverage and high line coverage, especially for new APIs. Together with HWASan and fuzzing, all of the tests described above contribute to a massive test load that needs to be sharded across multiple devices to ensure that it completes in a reasonable amount of time.

Illustration of modularized testing framework

We test the upcoming ART release every day by compiling over 18 million APKs and running app compatibility tests, and startup, performance, and memory benchmarks on a variety of Android devices that replicate the diversity of our ecosystem as closely as possible. Once tests pass with all possible compilation modes, all Garbage Collector algorithms, and supported OS versions, we begin gradually rolling out the next ART release.

Benefits of ART Google Play system updates

By updating ART independently of OS updates, users get the latest performance optimizations and security fixes as quickly as possible, while developers get OpenJDK improvements and compiler optimizations that benefit both Java and Kotlin.

As shown in the graph below, the runtime and compiler optimizations in the ART 13 update delivered real-world app start-up improvements of up to 30% on some devices.

Graph of average app startup time showing startup time in milliseconds with improvement up to 30% across 12 weeks on devices running the latest ART Google Play system update

ART updates allow us to frequently deploy fixes with little additional effort from our ecosystem partners. They include propagating upstream OpenJDK fixes to Android devices as quickly as possible, as well as runtime and compiler security fixes, such as CVE-2022-20502, which was detected by our automated fuzzing tests.

For developers, ART updates mean that you can now target the latest programming features. ART 13 delivered OpenJDK 11 core language features, which was the fastest-ever adoption of a new OpenJDK release on Android devices.

What’s next

In the coming months, we'll be releasing ART 14 to all compatible devices. ART 14 includes OpenJDK 17 support along with new compiler and runtime optimizations that improve performance while reducing code size. Stay tuned for more details on ART 14!

Java and OpenJDK are trademarks or registered trademarks of Oracle and/or its affiliates.

Choosing the right storage experience

Posted by Yacine Rezgui - Developer Relations Engineer

The upcoming stable release of Android 14 is fast approaching. Now is a great time to test your app with this new release’s changes if you haven’t done so already. With Platform Stability, you can even submit apps targeting SDK 34 to the Google Play Store.

Android 14 introduces a new feature called Selected Photos Access, allowing users to grant apps access to specific images and videos in their library, rather than granting access to all media of a given type. This is a great way for users to feel more comfortable sharing media with apps, and it's also a great way for developers to build apps that respect user privacy.

Image in four panels showing Selected Photos Access being used to share media from the user's on-device library
To ease the migration for apps that currently use storage permissions, apps will run in a compatibility mode. In this mode, if a user chooses “Select photos and videos”, the permission will appear to be granted, but the app will only be able to access the selected photos. The permission will be revoked when your app process is killed or stays in the background for a certain amount of time (similar to one-time permissions). When your app requests the permission again, users can select a different set of pictures or videos if they wish. Instead of letting the system manage this re-selection, it’s recommended that apps handle the process themselves to provide a better user experience.

Image in four panels showing media reselection being used to update user's choice of which media to be shared

Choosing the right storage experience

Even when your app correctly manages media re-selection, we believe that for the vast majority of apps, the permissionless photo picker that we introduced last year will be the best media selection solution for both user experience and privacy. Most apps let users choose media for tasks such as attaching a file to an email, changing a profile picture, or sharing with friends. The Android photo picker’s familiar UI gives users a consistent, high-quality experience that helps them grant access with confidence, allowing you to focus on the differentiating features of your app. If you absolutely need a more tightly integrated solution, integrating with MediaStore can be considered as an alternative to the photo picker.


Android photo picker

Image of My Profile page on a mobile device

To use the photo picker in your app, you only need to register an activity result:

// Using Jetpack Compose, you should use rememberLauncherForActivityResult instead of registerForActivityResult

// Registers a photo picker activity launcher in single-select mode
val pickMedia = registerForActivityResult(PickVisualMedia()) { uri ->
    // Callback is invoked after the user selects a media item or closes the photo picker
    if (uri != null) {
        Log.d("PhotoPicker", "Selected URI: $uri")
    } else {
        Log.d("PhotoPicker", "No media selected")
    }
}

The photo picker allows customization of media type selection between photos, videos, or a specific mime type when launched:

// Launch the photo picker and let the user choose images and videos.
pickMedia.launch(PickVisualMediaRequest(PickVisualMedia.ImageAndVideo))

// Launch the photo picker and let the user choose only images.
pickMedia.launch(PickVisualMediaRequest(PickVisualMedia.ImageOnly))

// Launch the photo picker and let the user choose only videos.
pickMedia.launch(PickVisualMediaRequest(PickVisualMedia.VideoOnly))

// Launch the photo picker and let the user choose only images/videos of a
// specific MIME type, like GIFs.
pickMedia.launch(PickVisualMediaRequest(PickVisualMedia.SingleMimeType("image/gif")))

You can set a maximum limit when allowing multiple selections:

// Registers a photo picker activity launcher in multi-select mode.
// In this example, the app lets the user select up to 5 media files.
val pickMultipleMedia = registerForActivityResult(PickMultipleVisualMedia(5)) { uris ->
    // Callback is invoked after the user selects media items or closes the
    // photo picker.
    if (uris.isNotEmpty()) {
        Log.d("PhotoPicker", "Number of items selected: ${uris.size}")
    } else {
        Log.d("PhotoPicker", "No media selected")
    }
}

Lastly, you can enable the photo picker support on older devices from Android KitKat onwards (API 19+) using Google Play services, by adding this entry to your AndroidManifest.xml file:

<!-- Prompt Google Play services to install the backported photo picker module -->
<service android:name="com.google.android.gms.metadata.ModuleDependencies"
         android:enabled="false"
         android:exported="false"
         tools:ignore="MissingClass">
    <intent-filter>
        <action android:name="com.google.android.gms.metadata.MODULE_DEPENDENCIES" />
    </intent-filter>
    <meta-data android:name="photopicker_activity:0:required" android:value="" />
</service>

In less than 20 lines of code you have a well-integrated photo/video picker within your app that doesn’t require any permissions!


Creating your own gallery picker

Creating your own gallery picker requires extensive development and maintenance, and the app needs to request storage permissions to get explicit user consent, which users can deny, or, as of Android 14, limit access to selected media.

First, request the correct storage permissions in the Android manifest depending on the OS version:

<!-- Devices running up to Android 12L -->
<uses-permission android:name="android.permission.READ_EXTERNAL_STORAGE" android:maxSdkVersion="32" />

<!-- Devices running Android 13+ -->
<uses-permission android:name="android.permission.READ_MEDIA_IMAGES" />
<uses-permission android:name="android.permission.READ_MEDIA_VIDEO" />

<!-- To handle the reselection within the app on Android 14+ (when targeting API 33+) -->
<uses-permission android:name="android.permission.READ_MEDIA_VISUAL_USER_SELECTED" />

Then, the app needs to request the correct runtime permissions, also depending on the OS version:

val requestPermissions = registerForActivityResult(RequestMultiplePermissions()) { results ->
    // Handle permission requests results
    // See the permission example in the Android platform samples:
    // https://github.com/android/platform-samples
}

if (Build.VERSION.SDK_INT >= Build.VERSION_CODES.UPSIDE_DOWN_CAKE) {
    requestPermissions.launch(arrayOf(READ_MEDIA_IMAGES, READ_MEDIA_VIDEO, READ_MEDIA_VISUAL_USER_SELECTED))
} else if (Build.VERSION.SDK_INT >= Build.VERSION_CODES.TIRAMISU) {
    requestPermissions.launch(arrayOf(READ_MEDIA_IMAGES, READ_MEDIA_VIDEO))
} else {
    requestPermissions.launch(arrayOf(READ_EXTERNAL_STORAGE))
}

With the Selected Photos Access feature in Android 14, your app should adopt the new READ_MEDIA_VISUAL_USER_SELECTED permission to control media re-selection, and update your app’s UX to let users grant your app access to a different set of images and videos.

When opening the selection dialog, photos and/or videos will be shown depending on the permissions requested: if you're requesting the READ_MEDIA_VIDEO permission without the READ_MEDIA_IMAGES permission, only videos would appear in the UI for users to select files.

// Allowing the user to select only videos
requestPermissions.launch(arrayOf(READ_MEDIA_VIDEO, READ_MEDIA_VISUAL_USER_SELECTED))

You can check if your app has full, partial or denied access to the device’s photo library and update your UX accordingly. It's even more important now to request these permissions when the app needs storage access, instead of at startup. Keep in mind that the permission grant can be changed between the onStart and onResume lifecycle callbacks, as the user can change the access in the settings without closing your app.

if (
    Build.VERSION.SDK_INT >= Build.VERSION_CODES.TIRAMISU &&
    (
        ContextCompat.checkSelfPermission(context, READ_MEDIA_IMAGES) == PERMISSION_GRANTED ||
        ContextCompat.checkSelfPermission(context, READ_MEDIA_VIDEO) == PERMISSION_GRANTED
    )
) {
    // Full access on Android 13+
} else if (
    Build.VERSION.SDK_INT >= Build.VERSION_CODES.UPSIDE_DOWN_CAKE &&
    ContextCompat.checkSelfPermission(context, READ_MEDIA_VISUAL_USER_SELECTED) == PERMISSION_GRANTED
) {
    // Partial access on Android 14+
} else if (ContextCompat.checkSelfPermission(context, READ_EXTERNAL_STORAGE) == PERMISSION_GRANTED) {
    // Full access up to Android 12
} else {
    // Access denied
}

Once you’ve verified that you have the right storage permissions, you can interact with MediaStore to query the device library (whether the granted access is partial or full):

data class Media(
    val uri: Uri,
    val name: String,
    val size: Long,
    val mimeType: String,
    val dateTaken: Long
)

// We run our querying logic in a coroutine off the main thread to keep the app responsive.
// Keep in mind that this code snippet queries all the images of the shared storage.
suspend fun getImages(contentResolver: ContentResolver): List<Media> = withContext(Dispatchers.IO) {
    val projection = arrayOf(
        Images.Media._ID,
        Images.Media.DISPLAY_NAME,
        Images.Media.SIZE,
        Images.Media.MIME_TYPE,
        Images.Media.DATE_TAKEN,
    )

    val collectionUri = if (Build.VERSION.SDK_INT >= Build.VERSION_CODES.Q) {
        // This allows us to query all the device storage volumes instead of the primary only
        Images.Media.getContentUri(MediaStore.VOLUME_EXTERNAL)
    } else {
        Images.Media.EXTERNAL_CONTENT_URI
    }

    val images = mutableListOf<Media>()

    contentResolver.query(
        collectionUri,
        projection,
        null,
        null,
        "${Images.Media.DATE_ADDED} DESC"
    )?.use { cursor ->
        val idColumn = cursor.getColumnIndexOrThrow(Images.Media._ID)
        val displayNameColumn = cursor.getColumnIndexOrThrow(Images.Media.DISPLAY_NAME)
        val sizeColumn = cursor.getColumnIndexOrThrow(Images.Media.SIZE)
        val mimeTypeColumn = cursor.getColumnIndexOrThrow(Images.Media.MIME_TYPE)
        val dateTakenColumn = cursor.getColumnIndexOrThrow(Images.Media.DATE_TAKEN)

        while (cursor.moveToNext()) {
            val uri = ContentUris.withAppendedId(collectionUri, cursor.getLong(idColumn))
            val name = cursor.getString(displayNameColumn)
            val size = cursor.getLong(sizeColumn)
            val mimeType = cursor.getString(mimeTypeColumn)
            val dateTaken = cursor.getLong(dateTakenColumn)

            images.add(Media(uri, name, size, mimeType, dateTaken))
        }
    }

    return@withContext images
}

The code snippet above is simplified to illustrate how to interact with MediaStore. In a proper production app, you should consider using pagination with something like the Paging library to ensure good performance.

You may not need permissions

As of Android 10 (API 29), you no longer need storage permissions to add files to shared storage. This means that you can add images to the gallery, record videos and save them to shared storage, or download PDF invoices without having to request storage permissions. If your app only adds files to shared storage and does not query images or videos, you should stop requesting storage permissions and set a maxSdkVersion of API 28 in your AndroidManifest.xml:

<!-- No permission is needed to add files from Android 10 -->
<uses-permission android:name="android.permission.READ_EXTERNAL_STORAGE" android:maxSdkVersion="28" />
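As an illustration, here is a minimal sketch of adding an image to shared storage on Android 10+ without any permission; the file name and compression quality are arbitrary:

import android.content.ContentResolver
import android.content.ContentValues
import android.graphics.Bitmap
import android.provider.MediaStore

// Minimal sketch (API 29+): insert a bitmap into the shared image collection
// without requesting any storage permission.
fun saveImage(contentResolver: ContentResolver, bitmap: Bitmap) {
    val values = ContentValues().apply {
        put(MediaStore.Images.Media.DISPLAY_NAME, "my-image.jpg") // arbitrary name
        put(MediaStore.Images.Media.MIME_TYPE, "image/jpeg")
    }

    contentResolver.insert(MediaStore.Images.Media.EXTERNAL_CONTENT_URI, values)?.let { uri ->
        contentResolver.openOutputStream(uri)?.use { stream ->
            bitmap.compress(Bitmap.CompressFormat.JPEG, 95, stream)
        }
    }
}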

ACTION_GET_CONTENT behavior change

In our last storage blog post, we announced that we’ll be rolling out a behavior change whenever the ACTION_GET_CONTENT intent is launched with an image and/or video MIME type. If you haven’t tested this change yet, you can enable it manually on your device:

adb shell device_config put storage_native_boot take_over_get_content true
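For reference, a minimal sketch of an ACTION_GET_CONTENT request that falls under this behavior change, using the GetContent activity result contract (the MIME type here is illustrative):

// Minimal sketch: ACTION_GET_CONTENT via the GetContent activity result contract.
// With the behavior change enabled, image/video MIME types route to the photo picker.
val getContent = registerForActivityResult(ActivityResultContracts.GetContent()) { uri ->
    // uri is null if the user closed the picker without selecting anything
    if (uri != null) {
        Log.d("GetContent", "Selected URI: $uri")
    }
}

// Launch with an image MIME type
getContent.launch("image/*")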

That covers how to offer visual media selection in your app with the privacy-preserving changes we’ve made across multiple Android releases. If you have any feedback or suggestions, submit tickets to our issue tracker.

Introducing Jetpack Emoji Picker: A New Way to Add Emojis to Your Android App

Posted by Lin Guo, Software Engineer

The use of emojis in communication has become increasingly popular in recent years. These small icons can be used to express a wide range of emotions and can add a personal touch to messages. However, adding emojis to your Android app can be a bit of a challenge. That's where the Emoji picker library comes in. You can simply add a few lines of code to your app, and you'll be able to start using emojis right away. It's the easiest way to get started with emojis, and it will make your app more fun and expressive.

Moving image of using EmojiPicker on Google Pixel 6 Pro
Figure 1. Emoji Picker

Some useful features provided by the library

Up-to-date emojis without tofu (☐)

Every year, new emoji versions are published, and we will regularly update the library to provide these new emojis. Higher-end phones will be able to render these newer emojis without any problem. On lower-end phones, newer emojis may be displayed as a small square box called tofu (☐). The library detects and removes these, ensuring it stays compatible across multiple Android versions and devices.

Smooth UI

The library includes several optimizations that reduce startup latency and speed up the scrolling experience, such as caching renderable emojis, drawing emojis asynchronously, and RecyclerView optimizations.

Personalized inclusive experience

User selections persist in the library. Newly chosen emojis are shown in the top row, making it simpler for users to find and share them. The library also offers a variety of emojis representing different people and cultures in the variant panels. If the user chooses an emoji from one of the variant panels (Figure 2), the choice is retained and set as the default in the main panel.

Image showing diversity of characters to choose from in EmojiPicker
Figure 2. Emoji variants

Integrate emoji picker into your app in 3 steps

Step 1: Import the library in build.gradle 
dependencies {
    implementation "androidx.emoji2:emojipicker:$version"
}

Step 2: Inflate the EmojiPickerView

Optionally set emojiGridColumns and emojiGridRows based on the desired size of each emoji cell

An example that uses EmojiPickerView in XML
<androidx.emoji2.emojipicker.EmojiPickerView
    app:emojiGridColumns="9" />

A very simple emoji picker should now appear in your app! For the next step, we assume you would like to do something with the picked emoji.


Step 3: Provide a listener for the picked emoji
// a listener example
emojiPickerView.setOnEmojiPickedListener {
    findViewById<EditText>(R.id.edit_text).append(it.emoji)
}

Now you have a basic functioning emoji picker. To customize it further (e.g., override some styles or provide different behavior for the recent emoji row), please refer to our API reference and sample app.
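For example, here is a hedged sketch of swapping in a custom recent-emoji provider; the interface names come from androidx.emoji2.emojipicker, but check the current API reference before relying on them:

// Sketch of a custom provider for the recent emoji row. In a real app you would
// persist selections to disk instead of keeping them in memory.
class InMemoryRecentEmojiProvider : RecentEmojiProvider {
    private val recents = mutableListOf<String>()

    override fun recordSelection(emoji: String) {
        recents.remove(emoji)
        recents.add(0, emoji) // most recently used first
    }

    override suspend fun getRecentEmojiList(): List<String> = recents.toList()
}

emojiPickerView.setRecentEmojiProvider(InMemoryRecentEmojiProvider())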

Feel free to file Bug Report or Feature Request to help us improve the library!

Try the K2 compiler in your Android projects

Posted by Márton Braun, Developer Relations Engineer

The Kotlin compiler is being rewritten for Kotlin 2.0. The new compiler implementation, codenamed K2, brings with it significant build speed improvements, compiling Kotlin code up to twice as fast as the original compiler. It also has a more flexible architecture that will enable the introduction of new language features after 2.0.

Try the new compiler

With Kotlin 1.9, K2 is now available in Beta for JVM targets, including Android projects. To help stabilize the new compiler and make sure you’re ready for Kotlin 2.0, we encourage you to try compiling your projects with the new compiler. If you run into any issues, you can report them on the Kotlin issue tracker.

To try the new compiler, update to Kotlin 1.9 and add the following to your project’s gradle.properties file:
kotlin.experimental.tryK2=true

Note that the new compiler should not be used for production builds yet. A good approach for trying it early is to create a separate branch in your project for compiling with K2. You can find an example of this in the Now in Android repository.

Tooling support

Plugins and tools that depend on the Kotlin compiler frontend will also have to be updated to add support for K2. Some tools already have experimental support for building with K2: the Jetpack Compose compiler plugin supports K2 starting in 1.5.0, which is compatible with Kotlin 1.9.

Android Lint also supports K2 starting in version 8.2.0-alpha12. To run Lint on K2, upgrade to this version and add android.lint.useK2Uast=true to your gradle.properties file. Note that any custom lint rules that rely on APIs from the old frontend will have to be updated to use the analysis API instead.
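Taken together, a gradle.properties for trying K2 with Lint might look like this (both flags exactly as described above):

# Try the K2 compiler (Kotlin 1.9+)
kotlin.experimental.tryK2=true
# Run Android Lint on K2 (AGP 8.2.0-alpha12+)
android.lint.useK2Uast=true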

Adding K2 support in other tools is still in progress: KSP and KAPT tasks currently fall back to using the old compiler when building your project with K2. However, compilation tasks can still run using K2 when these tools are used.

Android Studio also relies on the Kotlin compiler for code analysis. Until Android Studio has support for K2, building with K2 might result in some discrepancies between the code analysis of the IDE and command line builds in certain edge cases.

If you use any additional compiler plugins, check their documentation to see whether they are compatible with K2 yet.

Get started with the K2 compiler today

The Kotlin 2.0 Compiler offers significant improvements to help you ship updates faster, be more productive, and spend more time focusing on what makes your app unique.

It already works with Jetpack Compose and we have a roadmap to improve support in other tools, including Android Studio, KSP, and compiler plugins. Now is a great time to try it in your app's codebase and provide feedback related to Kotlin, Compose, or Lint.

Credential Manager beta: easy & secure authentication with passkeys on Android

Posted by Diego Zavala, Product Manager, and Niharika Arora, Android Developer Relations Engineer

Today, we are excited to announce the beta release of Credential Manager with a finalized API surface, making it suitable for use in production. As we previously announced, Credential Manager is a new Jetpack library that allows app developers to simplify their users' authentication journey, while also increasing security with support of passkeys.

Authentication provides secure access to personalized experiences, but it has challenges. Passwords, which are widely used today, are difficult to use and remember, and are not always secure. Many applications and services require two-factor authentication (2FA) to log in, adding more friction to the user's flow. Lastly, sign-in methods have proliferated, making it difficult for users to remember how they signed in. This proliferation has also added complexity for developers, who now need to support multiple integrations and APIs.

Credential Manager brings support for passkeys, a new passwordless authentication mechanism, together with traditional sign-in methods, such as passwords and federated sign-in, into a single interface for the user and a unified API for developers.


image showing end-to-end journey to sign in using a passkey on a mobile device
End-to-end journey to sign in using a passkey

With Credential Manager, users will benefit from seeing all their credentials in one place: passkeys, passwords, and federated credentials (such as Sign in with Google), without needing to tap in three different places. This reduces user confusion and simplifies choices when logging in.


image showing the unified account selector that support multiple credential types across multiple accounts on a mobile device
Unified account selector that supports multiple credential types across multiple accounts

Credential Manager also makes the login experience simpler by deduping across sign-in methods for the same account and surfacing only the safest and simplest authentication method, further reducing the number of choices users need to make. So, if a user has a password and a passkey for a single account, they won’t need to decide between them when signing in; rather, the system will propose using the passkey - the safest and simplest option. That way, users can focus on choosing the right account instead of the underlying technology.


image showing how a passkey and a password for the same account are deduped on a mobile device
A passkey and a password for the same account are deduped

For developers, Credential Manager supports multiple sign-in mechanisms within a single API. It provides support for passkeys on Android apps, enabling the transition to a passwordless future. At the same time, it also supports passwords and federated sign-in like Sign in with Google, simplifying integration requirements and ongoing maintenance.

Who is already using Credential Manager?

Kayak has already integrated with Credential Manager, providing users with the advantages of passkeys and simpler authentication flows.

"Passkeys make creating an account lightning fast by removing the need for password creation or navigating to a separate app to get a link or code. As a bonus, implementing the new Credential Manager library also reduced technical debt in our code base by putting passkeys, passwords and Google sign-in all into one new modern UI. Indeed, users are able to sign up to Kayak with passkeys twice as fast as with an email link, which also improves the sign-in completion rate."  

– Matthias Keller, Chief Scientist and SVP, Technology at Kayak 

Something similar has been observed at Shopify:

“Passkeys work across browsers and our mobile app, so it was a no-brainer decision for our team to implement, and the resulting one-tap user experience has been truly magical. Buyers who are using passkeys to log in to Shop are doing so 14% faster than those who are using other login methods (such as email or SMS verification)”

– Mathieu Perreault, Director of Engineering at Shopify

Support for multiple password managers

Credential Manager on Android 14 and higher supports multiple password managers at the same time, enabling users to choose the provider of their choice to store, sync and manage their credentials. We are excited to be working with several leading providers like Dashlane on their integration with Credential Manager.

“Adopting passkeys was a no-brainer for us. It simplifies sign-ins, replaces the guesswork of traditional authentication methods with a reliable standard, and helps our users ditch the downsides of passwords. Simply put, it’s a big win for both us and our users. Dashlane is ready to serve passkeys on Android 14!”

– Rew Islam, Director of Product Engineering and Innovation at Dashlane

Get started

To start using Credential Manager, you can refer to our integration guide.
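As a rough sketch of what an integration looks like with the androidx.credentials Jetpack library (exact signatures may differ between library versions, so treat this as illustrative and follow the integration guide for specifics):

// Illustrative sketch: requesting a credential with Credential Manager.
// requestJson is your server-generated WebAuthn request, shown here as a placeholder.
suspend fun signIn(context: Context, requestJson: String) {
    val credentialManager = CredentialManager.create(context)

    val request = GetCredentialRequest(
        credentialOptions = listOf(
            GetPasswordOption(),                      // saved passwords
            GetPublicKeyCredentialOption(requestJson) // passkeys
        )
    )

    try {
        val result = credentialManager.getCredential(context, request)
        when (val credential = result.credential) {
            is PublicKeyCredential -> {
                // Send credential.authenticationResponseJson to your server for verification
            }
            is PasswordCredential -> {
                // Validate credential.id and credential.password with your backend
            }
        }
    } catch (e: GetCredentialException) {
        // The user cancelled, or no credential was available
    }
}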

We'd love to hear your input during this beta release, so please let us know about your experience integrating with Credential Manager, using passkeys, or any other feedback you might have:

Prepare your app for the new Samsung tablets, foldables and watches

Posted by the Android team

From foldable innovations to seamless connectivity, Google and Samsung have continued to work together to create helpful experiences across Android phones, tablets, smartwatches and more. Today at Galaxy Unpacked in Seoul, Samsung unveiled the new Galaxy Z Flip5 and Z Fold5, Galaxy Watch6 series, and Galaxy Tab S9 series.

With these new devices from Samsung, there are four more reasons to ensure your app looks great across all your users’ favorite screens. Here are three ways you can make sure your app is ready for these great new Samsung devices:

1. Provide a great foldable experience

The launch of the new Galaxy Z Flip5 and Z Fold5 brings two brand new foldables to the Android ecosystem, so it is important to provide experiences that have fully adaptive UIs. The bottom line is that layout and app behavior should be based on device configuration and available features, and not the physical type of the device.

When it comes to providing a great foldable experience, here are a few of our top recommendations:

Illustration of Window Class sizes showing compact, medium, and expanded sizes across widths from 600dp through 840 dp

  • Use window size classes to guide layout decisions based on your current windowing state using opinionated breakpoints that are derived from common device types.
  • Observe folding features with Jetpack WindowManager, which provides the set of folding features that intersect your app's current window (see the sketch after this list).
  • Make dynamic, runtime decisions based on whether a feature is available, instead of assuming that a feature is or is not available for a certain kind of device.
  • Referring in the UI to the user’s device as simply a “device” covers all form factors and is the simplest to implement. However, differentiating between the multiple devices a user may have provides a more polished experience and enables you to display the type of the device to the user using heuristics relevant to your particular use case.
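To make the second bullet concrete, here is a minimal Kotlin sketch of observing folding features with Jetpack WindowManager from inside an activity’s onCreate; the MainActivity name is a placeholder, and it assumes the androidx.window and lifecycle-runtime-ktx artifacts:

// Minimal sketch: collecting folding features with Jetpack WindowManager.
lifecycleScope.launch {
    lifecycle.repeatOnLifecycle(Lifecycle.State.STARTED) {
        WindowInfoTracker.getOrCreate(this@MainActivity)
            .windowLayoutInfo(this@MainActivity)
            .collect { layoutInfo ->
                val fold = layoutInfo.displayFeatures
                    .filterIsInstance<FoldingFeature>()
                    .firstOrNull()
                // Runtime decision: e.g., use a tabletop layout when half-opened horizontally
                val isTabletop = fold?.state == FoldingFeature.State.HALF_OPENED &&
                    fold.orientation == FoldingFeature.Orientation.HORIZONTAL
            }
    }
}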

You can learn more about how (and why) to implement the recommendations above in this detailed blog and, to find best practices for updating your app, check out the Support different screen sizes page.

2. Design with multi-device experiences in mind

With new devices, big and small, it is important to think through the user experience you hope to accomplish. A large part of that is the UI and design of your app – with specific consideration to account for based on screen sizes and types.

Ensuring your app looks great on large screens is a critical part of your users’ experience. Material You supports beautiful, efficient tablet and foldable experiences – and, at Google I/O this year, the team dove into the latest updates to large screen guidelines for designers and developers. You can also get inspired by the latest design guidance and mockups in the Large Screens Gallery.

To help with the challenges of designing and building great watch experiences that work for all, we created the Wear OS Gallery. The blog and the series of videos that accompany it are built to get you started designing inclusive smartwatch apps. For even more information on beautiful smartwatch design, explore the gallery, where you can find general design tips, verticalized use cases, and implementation ideas.

3. Get ready for Wear OS 4

The next generation of Wear OS is here! The Galaxy Watch6 series comes with the newest version of Google’s smartwatch platform, Wear OS 4. This platform update is also coming soon to other Samsung Galaxy watches, including the Watch4 and Watch5.

Wear OS 4 is based on Android 13, which is several versions newer than the current Wear OS version, so your app will need to handle the system behavior changes that took effect in Android 12 and Android 13. We recommend you start by testing your app and releasing a compatible update first; as devices get upgraded to Wear OS 4, it’s a basic but critical level of quality that provides a good app experience for users.

Download the Wear OS 4 emulator in Android Studio Hedgehog to explore new features and test your app on Wear OS 4 Developer Preview.

The release of Wear OS 4 comes with many exciting changes – including a new way to build watchfaces.

The new Watch Face Format is a declarative XML format that allows you to configure the appearance and behavior of watch faces. This means that there's no executable code involved in creating a watch face, and there's no code embedded in your watch face APK. The Wear OS platform takes care of the logic needed to render the watch face so you can focus on your creative ideas, rather than code optimizations or battery performance.

Get started with watch faces using our documentation or create your own watch face with Samsung’s Watch Face Studio design tool.

Get started building a multi-device experience today!

With all the amazing additions to the Android ecosystem coming from Galaxy Unpacked, there has never been a better time to be sure your app looks great on all the devices your users know and love - from tablets to foldables to watches.

Learn more about building multi-device experiences from Deezer, where they increased their monthly active users 4X after improving multi-device support. Then get started with Jetpack WindowManager to help you build a responsive app for large screens by checking out the documentation and sample app. Finally, get to know Wear OS 4 and try it out with your app!

Deezer increased its monthly active users 4X after improving multi-device support

Posted by the Android team

Deezer is a global music streaming platform that provides users access to over 110 million tracks. Deezer aims to make its application easily accessible, letting users listen to their audio when, where, and how they want. With the growing popularity of Wear OS devices and large screens and foldables, the Deezer team saw an opportunity to give its users more ways to stream by enhancing its multi-device support.

“We’re focused on delivering a consistently great UX on as many devices as possible.” — Hugo Vignaux, senior product manager at Deezer

Increasing smart watch support

Over the past few years, users increasingly asked Deezer to make its app available on Wear OS. During this time, the Deezer team had also seen rapid growth in the wearable market.

“The Wear OS market was growing thanks to the Fitbit acquisition by Google, the Pixel watch announcement, and the switch to Wear OS on Galaxy watches,” said Hugo Vignaux, a senior product manager at Deezer. “It was perfect timing because Google raised the opportunity with us to invest in Wear OS by joining the Media Experience Program in 2022.”

Deezer’s developers initially focused on providing instant, easy access to users’ personalized playlists from the application. To do this, engineers streamlined the app’s Wear OS UI, making it easier for users to control the app from their wrist. They also implemented a feature that allowed users to download their favorite Deezer playlists straight to their smartwatches, making offline playback possible without requiring a phone or an internet connection.

The Deezer team relied on Google’s Horologist and its Media Toolkit during development. Horologist and its libraries guided the team and ensured updates to the UI adhered to Wear OS best practices. It also made rolling out features like audio and Bluetooth management much easier.

“The player view offered by the Media Toolkit was a source of inspiration and guaranteed that the app’s code quality was up to par,” said Hugo. “It also allowed us to focus on unit testing and resiliency rather than developing new features from scratch.”

More support for large screens and foldables

Before updating the app, Deezer’s UX wasn’t fully optimized for large screens and foldables. With this latest update, Deezer developers created special layouts for multitasking on large screens, like tablets and laptops, and used resizable emulators to optimize the app’s resizing capabilities for each screen on foldables.

“Supporting large screens means we can better fit multiple windows on a screen,” said Geoffrey Métais, engineering manager at Deezer. “This allows users to easily switch between apps, which is good because Deezer doesn’t require a user's full attention for them to make use of its UI.”

On tablets, Deezer developers split pages that were displayed vertically to be displayed horizontally. Developers also implemented a navigation rail and turned some lists into grids. These simple quality-of-life updates improved UX by giving users an easier way to click through the app.

Making these changes was easy for developers thanks to the Jetpack WindowManager library. “The WindowManager library made it simple to adapt our UI to different screen sizes,” said Geoffrey. “It leverages Jetpack Compose’s modularity to adapt to any screen size. And Compose code stays simple and consistent despite addressing a variety of different configurations.”

Updates to large screens and foldables and Wear OS were all created using Jetpack Compose and Compose for Wear OS, respectively. With Jetpack Compose, Deezer developers were able to efficiently create and implement a design system that focused on technical issues within the new app. The Deezer team attributes their increased productivity with Compose to Composable functions, which lets developers reuse code segments, and Android Studio, which helps developers iterate on features faster.

“The combination of a proper Design System with Jetpack Compose’s modularity and reactive paradigms is a very smart and efficient solution to improve usability without losing development productivity,” said Geoffrey.

“With the new capabilities of Wear OS 3, we’ve optimized the Deezer experience for the next generation of smartwatches, letting our users listen to their music anywhere, anytime.” — Hugo Vignaux, senior product manager at Deezer

The impact of increased multi-device support

Increasing multi-device support was easy for Deezer developers thanks to the tools and resources offered by Google. The updates the Deezer team made across screens improved the app’s UI, making it easier for users to navigate the app and listen to audio on their own terms.

Since updating for Wear OS and other Android devices, the Deezer team saw a 4X increase in user engagement and received positive feedback from its community.

“Developing for WearOS and across devices was great thanks to the help of the Google team and the availability of libraries and APIs that helped us deliver some great features, such as Horologist and its Media Toolkit. All those technical assets were very well documented and the Google team’s dedication was tremendous,” said Hugo.

Get started

Learn how you can start developing for Wear OS and other Android devices today.

Deep dive into Live Edit for Jetpack Compose UI

Posted by Alan Leung, Staff Software Engineer, Fabien Sanglard, Senior Software Engineer, Juan Sebastian Oviedo, Senior Product Manager
A close look at how the Android Studio team built Live Edit, a feature that accelerates the Compose development process by continuously updating the running application as code changes are made.

What’s Live Edit and how can it help me?

Live Edit introduces a new way to edit your app’s Jetpack Compose UI by instantly deploying code changes to the running application on a physical device or emulator. This means that you can make changes to your app’s UI and immediately see their effect on the running application, enabling you to iterate faster and be more productive in your development. Live Edit was recently released to the stable channel with Android Studio Giraffe and can be enabled in the Editor settings. Developers like Plex and Pocket Casts are already using Live Edit and it has accelerated their development process for Compose UI. It is also helping them in the process of migrating from XML views to Compose.


Moving image illustrating Live Edit in action on Android Studio Hedgehog
Live Edit in action on Android Studio Hedgehog

When should I use Live Edit?

Live Edit is a different feature from Compose Preview and Apply Changes. These features provide value in different ways:

Live Edit (Kotlin only; supports live recomposition)
  • Description: Make changes to your Compose app’s UI and immediately see their effect on the running application on an emulator or physical device.
  • When to use it: Quickly see the effect of updates to UX elements (such as modifier updates and animations) on the overall app experience while the application is running.

Compose Preview (Compose only)
  • Description: Visualize Compose elements in the Design tab within Android Studio and see them automatically refresh as you make code changes.
  • When to use it: Preview individual Compose elements in one or many different configurations and states, such as dark theme, locales, and font scale.

Apply Changes
  • Description: Deploy code and resource updates to a running app without restarting it, and in some cases without restarting the current activity.
  • When to use it: Update code and resources in a non-Compose app without having to redeploy it to an emulator or physical device.

How does it work?

At a high level, Live Edit does the following:

  1. Detects source code changes.
  2. Compiles classes that were updated.
  3. Pushes new classes to the device.
  4. Adds a hook in each class method bytecode to redirect calls to the new bytecode.
  5. Edits the app classpath to ensure changes persist even if the app is restarted.

Illustration of Live Edit architechture
Live Edit architecture

Keystroke detection

This step is handled via the IntelliJ IDEA Program Structure Interface (PSI) tree. Listeners allow Live Edit to detect the moment a developer makes a change in the Android Studio editor.

Compilation

Fundamentally, Live Edit still relies on the Kotlin compiler to generate code for each incremental change.

Our goal was to create a system where there is less than 250ms latency between the last keystroke and the moment the recomposition happens on the device. Doing a typical incremental build or invoking an external compiler in a traditional sense would not achieve our performance requirement. Instead, Live Edit leverages Android Studio’s tight integration with the Kotlin compiler.

On the highest level, the Kotlin compiler’s compilation can be divided into two stages.

  • Analysis
  • Code generation

The analysis performed as the first step is not entirely restricted to a build process. In fact, the same step is frequently done outside the build system as part of an IDE. From basic syntax checking to auto-complete suggestions, the IDE is constantly performing the same analysis (Step 1 of Diagram 1) and caching the result to provide Kotlin- and Compose-specific functionality to the developer. Our experiments show that the majority of compilation time is spent in the analysis stage during a build. Live Edit uses that information to invoke the Compose compiler, which allows compilation to happen within 200ms on typical developer laptops. Live Edit further optimizes the code generation process by focusing solely on generating the code that is necessary to update the application.

The result is a plain .class file (not a .dex file) that is passed to the next step in the pipeline, desugaring.

How to desugar

When Android app source code is processed by the build system, it is usually “desugared” after it is compiled. This transformation step lets an app run on older Android versions that lack support for newer language features and recent APIs. It allows developers to use new APIs in their app while still making it available to devices that run older versions of Android.

There are two kinds of desugaring, known as language desugaring and library desugaring. Both of these transformations are performed by R8. To make sure the injected bytecode will match what is currently running on the device, Live Edit must make sure each class file is desugared in a way that is compatible with the desugaring done by the build system.

Language desugaring:

This type of bytecode rewrite provides newer language features on devices with lower targeted API levels. The goal is to support language features such as default interface methods, lambda expressions, and method references, all the way down to the app’s min API level. That value is extracted from the .apk file’s DEX files using markers left there by R8.

API desugaring:

Also known as library desugaring, this form of desugaring aims to support Java SDK methods and classes. It is configured by a JSON file. Among other things, method call sites are rewritten to target functions located in the desugar library (which is also embedded in the app, in a DEX file). To perform this step, Gradle collaborates with Live Edit by providing the JSON file used during library desugaring.
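For context, library desugaring is typically switched on in an app’s Gradle build as sketched below (Kotlin DSL; the desugar_jdk_libs version is illustrative). Live Edit has to desugar its freshly compiled classes consistently with whatever configuration the build declares here.

// build.gradle.kts (app module)
android {
    compileOptions {
        // Enables R8's library desugaring of java.* APIs.
        isCoreLibraryDesugaringEnabled = true
        sourceCompatibility = JavaVersion.VERSION_1_8
        targetCompatibility = JavaVersion.VERSION_1_8
    }
}

dependencies {
    // The backported JDK APIs, embedded in the app as a DEX file.
    // The version shown here is illustrative.
    coreLibraryDesugaring("com.android.tools:desugar_jdk_libs:2.0.4")
}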

Function trampoline

To facilitate rapid, per-keystroke updates to a running application, we decided not to use the Android Runtime’s (ART) JVMTI code-swap capability for every single edit. Instead, JVMTI is used only once, to perform a code swap that installs trampolines on a subset of methods within the soon-to-be-modified classes inside the VM. Using something we call the “Primer” (Step 3 of Diagram 1), invocations of those methods are redirected to a specialized interpreter. When the application has not seen updates for a period of time, Live Edit replaces the interpreted code with traditional DEX code to regain ART’s performance. This saves developers time by updating the running application immediately as code changes are made.

Illustration of Function trampoline process
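Conceptually, a trampolined method behaves like the following sketch. Everything here is hypothetical Kotlin written for readability; the real mechanism patches method bytecode via JVMTI rather than checking a flag in source code.

// All names below are hypothetical; this models the behavior, not the
// actual implementation.
object LiveEditRuntime {
    // Method key -> freshly compiled bytecode for the edited body.
    private val editedBodies = mutableMapOf<String, ByteArray>()

    fun hasEdit(methodKey: String) = methodKey in editedBodies

    fun interpret(methodKey: String, args: Array<Any?>): Any? {
        // Would execute the updated .class bytecode in the on-device interpreter.
        TODO("run the edited bytecode in the opcode interpreter")
    }
}

fun renderTitle(name: String): String {
    val key = "MainKt.renderTitle(Ljava/lang/String;)Ljava/lang/String;"
    // Trampoline: if an edit is pending, jump into the interpreter.
    if (LiveEditRuntime.hasEdit(key)) {
        return LiveEditRuntime.interpret(key, arrayOf(name)) as String
    }
    // Otherwise the original compiled body runs at full speed.
    return "Hello, $name"
}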

How code is interpreted

Live Edit compiles code on the fly. The resulting .class files are pushed, trampolined (as previously described), and then interpreted on the device. This interpretation is performed by the LiveEditInterpreter. The interpreter is not a full VM inside ART; it is a frame interpreter built on top of ASM Frame. ASM Frame handles low-level logistics, such as pushing and loading stack and local variables, but it needs an Interpreter to actually execute each opcode. That is what the OpcodeInterpreter is for.

Flow chart of Live Edit interpretation

The Live Edit interpreter is a simple loop that drives opcode interpretation through ASM’s Interpreter.
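The sketch below gives a flavor of such a loop over ASM’s tree API (org.objectweb.asm.tree). It handles only a couple of integer opcodes and ignores branching entirely; the real OpcodeInterpreter covers the full instruction set.

import org.objectweb.asm.Opcodes
import org.objectweb.asm.tree.AbstractInsnNode
import org.objectweb.asm.tree.MethodNode

// Simplified frame-interpreter loop: walk the method's instruction list
// and execute each opcode against an operand stack. Illustrative only.
fun interpret(method: MethodNode): Any? {
    val stack = ArrayDeque<Any?>()
    var insn: AbstractInsnNode? = method.instructions.first
    while (insn != null) {
        when (insn.opcode) {
            Opcodes.ICONST_0 -> stack.addLast(0)
            Opcodes.IADD -> {
                val b = stack.removeLast() as Int
                val a = stack.removeLast() as Int
                stack.addLast(a + b)
            }
            Opcodes.IRETURN -> return stack.removeLast()
            // ... the real interpreter handles every opcode, branches included.
        }
        insn = insn.next
    }
    return null
}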

Some JVM instructions cannot be implemented using a pure Java interpreter (invokeSpecial and monitorEnter/monitorExit are particularly problematic). For these, Live Edit uses JNI.

Dealing with lambdas

Lambdas are handled in a different manner because changes to lambda captures can produce VM classes that differ in many method signatures. Instead of redefining existing loaded classes as described in the previous section, new lambda-related updates are sent to the running device and loaded as brand-new classes.
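A JVM-flavored sketch of the idea follows; on a device the new classes would arrive as DEX and be loaded by ART, but the principle of defining a new class instead of redefining an old one is the same. The class name and method are hypothetical.

// Hypothetical sketch: each edited lambda is defined under a fresh name
// (e.g. MainKt$lambda$1$v2) instead of redefining the loaded original.
class LiveEditClassLoader(parent: ClassLoader) : ClassLoader(parent) {
    fun defineUpdatedClass(name: String, bytes: ByteArray): Class<*> =
        defineClass(name, bytes, 0, bytes.size)
}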

How does recomposition work?

Developers wanted a seamless and frictionless new approach to programming Android applications. A key part of the Live Edit experience is the ability to see the application update while the developer continuously writes code, without having to explicitly trigger a re-run with a button press. We needed a UI framework with the ability to listen to model changes within the application and perform optimal redraws accordingly. Luckily, Jetpack Compose fits this task perfectly. With Live Edit, we added an extra dimension to the reactive programming paradigm: the framework also observes changes to the functions’ code.
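For example, in an ordinary snapshot-state composable like the sketch below, Compose already redraws when data changes; Live Edit extends that reactivity to the code itself, so editing the Text literal also triggers a redraw. The composable is illustrative, not taken from Live Edit.

import androidx.compose.foundation.clickable
import androidx.compose.material3.Text
import androidx.compose.runtime.Composable
import androidx.compose.runtime.getValue
import androidx.compose.runtime.mutableStateOf
import androidx.compose.runtime.remember
import androidx.compose.runtime.setValue
import androidx.compose.ui.Modifier

// Compose recomposes this function whenever `count` changes. With Live
// Edit, editing the string literal below likewise triggers recomposition,
// without redeploying the app.
@Composable
fun Counter() {
    var count by remember { mutableStateOf(0) }
    Text(
        text = "Taps: $count", // try editing this string with Live Edit
        modifier = Modifier.clickable { count++ }
    )
}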

To facilitate code-modification monitoring, the Jetpack Compose compiler supplies Android Studio with a mapping from function elements to a set of recomposition group keys. The attached JVMTI agent asynchronously invalidates the Compose state of a changed function, and the Compose runtime recomposes the Composables that were invalidated.

How we handle runtime errors during recomposition

Moving image of Live Edit handling a runtime error

While the concept of a continuously updating application is rather exhilarating, our field studies showed that as developers write code, the program can pass through incomplete states in which updating and re-executing certain functions would lead to undesirable results. Besides the automatic mode, where updates happen almost continuously, we have introduced two manual modes for developers who want a bit more control over when the application gets updated after new code is detected.

Even with that in mind, we want to make sure common issues caused by executing incomplete functions do not cause the application to terminate prematurely. For example, Live Edit detects cases where a loop’s exit condition is still being written, to avoid trapping the program in an infinite loop. And if a Live Edit update triggers recomposition and causes a runtime exception to be thrown, the Compose runtime catches the exception and recomposes using the last known good state.

Consider the following piece of code:

var x = y / 10

Suppose the developer wants to change 10 to 50 by deleting the character 1 and then inserting 5. Android Studio could potentially update the application before the 5 is typed, leaving “y / 0” and causing a division-by-zero ArithmeticException. With the error handling described above, however, the application simply reverts to “y / 10” until further updates arrive from the editor.
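Conceptually, the recovery resembles the sketch below. The function names are hypothetical; the actual handling lives inside the Compose runtime.

// Conceptual sketch only: run the edited code, and fall back to the
// last composition that rendered successfully if it throws.
fun recomposeWithRecovery(applyEditedCode: () -> Unit, recomposeLastKnownGood: () -> Unit) {
    try {
        applyEditedCode() // e.g. a body that currently reads "var x = y / 0"
    } catch (t: Throwable) {
        recomposeLastKnownGood()
    }
}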

What’s coming?

The Android Studio team believes Live Edit will positively change how UI code is written, and we are committed to continuously improving the Live Edit development experience. We are working on expanding the types of edits developers can perform. Furthermore, future versions of Live Edit will eliminate the need to invalidate the whole application in certain scenarios.

Additionally, PSI event detection comes with limitations, such as when the user edits import statements. To solve this problem, future versions of Live Edit will rely on .class diffing to detect changes. Lastly, full persistence isn’t currently available: future versions of Live Edit will allow the application to be restarted outside of Android Studio while retaining the Live Edit changes.

Get started with Live Edit

Live Edit is ready to be used in production, and we hope it can greatly improve your experience developing for Android, especially for UI-heavy iteration. We would love to hear about your use cases and best practices, as well as your bug reports and suggestions.

Java is a trademark or registered trademark of Oracle and/or its affiliates.