Meet the Android Studio Team: A Conversation with Director of Product Management, Jamal Eason

Posted by Ashley Tschudin – Social Media Specialist, MTP at Google

Dive into the world of Android Studio and meet the masterminds behind your favorite development tools! In our recurring blog series, "Meet the Android Studio Team," we'll introduce you to the brilliant engineers, designers, product managers, and more who are shaping the future of Android development.

Join us each week to uncover the unique perspectives and stories of the people who make Android Studio the best it can be.


Jamal Eason: Building better Android apps - insights on Gemini, Crashlytics, and App Quality

Meet Jamal Eason, a Director of Product Management at Google, whose passion for empowering developers shines through in his work on Android Studio.

His journey, from studying computer science at West Point to developing Android hardware at Intel (including contributions to the Motorola Razr i), showcases a deep understanding of the developer experience. From attending the very first Android Studio unveiling at Google I/O to now shaping its future, Jamal brings a unique perspective to the team.

Jamal shares his insights on the evolution of Android Studio, the importance of a strong developer community, and the features he's most proud of.


Can you tell us about your journey to becoming a part of the Android Studio team? What sparked your interest in Android development?

I've been interested in programming from an early age, especially since studying computer science in undergrad at the United States Military Academy (West Point). In that time, I've been interested not just in the creation of software but also in the tools developers use to make it.

My interest in Android development came while I was preparing for my first job after my military career in telecommunications and computer networks, joining a team at Intel Corporation that worked with Google to build Android hardware products. I thought the best way to understand Google and mobile was to download the Android SDK and create my own app end to end. My first taste of Android was Froyo (Android 2.2) using the Eclipse-based Android Developer Tools (ADT) IDE.

At Intel, I worked on creating the x86-based version of the Android Emulator and emulator system image, as well as a new hypervisor that accelerated the performance of the Android Emulator on x86-based laptops. After helping ship the Motorola Razr i (XT890), an Android phone with Intel technology inside and x86-optimized apps on the device, I made the move to the Android team at Google. With my experience developing Android apps and shipping Android developer tools, the Android developer tools team was a natural fit.

Interestingly, I attended Google I/O the year Android Studio was first revealed, and by the following year's Google I/O I was working on the team that brought Android Studio to its beta release.

What unique perspective or experience do you bring to the Android Studio team, and how does it influence your work?

Unique experiences I bring include:

  • Technical Translation - In my prior roles, I worked with highly technical teams and learned how to take abstract technical concepts and present them to audiences of different technical skill levels. And in the reverse, I worked with many non-technical customers and colleagues and learned how to translate their pain points into product opportunities solved with technical solutions and innovation.
  • User Empathy - I was previously a software developer, I still regularly code on small side projects, and I really enjoy spending time with developers who use Android Studio. From first-hand experience and user engagement, I regularly bring the voice of the user into the discussion, from the inception of a product idea to the final stages of the release process.
  • UX Design Sense - In a previous career, I designed and created websites and user interfaces for software, and I developed an eye for good UX design and flows, particularly in technical software products. These skills complement the work of the dedicated UX design team in Android Studio and help us avoid productivity pitfalls from poor product and UX flows.

In your opinion, what is the most impactful feature or improvement the Android team has introduced in recent years, and why?

It’s hard to nail down just one, but the top three are:

  1. Product quality
  2. Integration of Gemini
  3. Integrations with Crashlytics and Play via App Quality Insights

The most impactful feature we worked on is product quality. We treat quality, especially the core code editing experience, as a feature. If a developer can’t write a line of code and deploy it to a device, then everything else is secondary. Since Android is always evolving, this is an ongoing effort, but one that's critical for the team to stay focused on.

On top of quality, the thoughtful integration of Gemini into Android Studio is a real accelerant for app development. Our focus with AI is to make Android developers more productive and to make the harder tasks and toil easier. From AI-powered code completion and built-in Gemini chat for Android app development, to enhancing existing tools with AI, such as using Gemini to generate Jetpack Compose UI previews, we are just at the beginning of leveraging AI to make Android app developers more productive.

Lastly, with App Quality Insights, it is now much easier for app developers to address the performance and quality issues surfaced by Firebase Crashlytics and Android Vitals from Google Play. Surfacing these issues right next to source code and source control makes resolving them much faster and more intuitive.

How does the Android Studio team ensure that products or features meet the ever-changing needs of developers?

First, the Android Studio team works hand-in-hand with the Android OS team, striving to deliver developer tools in concert with new Android OS and API changes so developers are ready to adopt new platform capabilities into their apps. Then, we constantly review and prioritize developer feedback received via our issue tracker and via the bi-annual developer survey we post on the Android Developers site. When we can, we also engage with developers via various social media channels. And lastly, we regularly interview developers at various experience levels, and from regions around the world, in targeted user research studies.

What advice would you give to aspiring Android developers who are just starting their journey?

  1. Start with a robust set of code labs and tutorials.
  2. Get inspired by the possibilities of Android and what you can build.
  3. Join the Android developer community.

Deploy with Confidence

Inspired by Jamal's journey and dedication to empowering developers? Explore the latest Android Studio features, including App Quality Insights, to improve your app's performance and address issues quickly.

Stay tuned

Don't miss the next installment of our "Meet the Android Studio Team" series, where we'll introduce you to another amazing member of our team and share their unique journey. Stay tuned for more!

Find Jamal Eason on LinkedIn and X.

Meet the Android Studio Team: A Conversation with Product Manager, Paris Hsu

Posted by Ashley Tschudin – Social Media Specialist, MTP at Google

Welcome to "Meet the Android Studio Team"; a short blog series where we pull back the curtain and introduce you to the passionate people who build your favorite Android development tools. Get to know the talented minds – engineers, designers, product managers, and more – who pour their hearts into crafting the best possible experience for Android developers.

Join us each week to meet a new member of the team and explore their unique perspectives.


Paris Hsu: Empowering Android developers with Compose tools

Meet Paris Hsu, a Product Manager at Google passionate about empowering developers to build incredible Android apps.

Her journey to the Android Studio team started with a serendipitous internship at Microsoft, where she discovered the power of developer tools. Now, as part of the UI Tools team, Paris champions intuitive solutions that streamline the development process, like the innovative Compose Tools suite.

In this installment of "Meet the Android Studio Team," Paris shares insights into her work, the importance of developer feedback, and her dream Android feature (hint: it involves acing that forehand).


Can you tell us about your journey to becoming a part of the Android Studio team? What sparked your interest in Android development?

Honestly, I joined a bit by chance! The summer before my last year of grad school, I was in the Microsoft Garage incubator internship program. Our project, InkToCode, turned handwritten designs into code. It was my first experience building developer tools, and it made me realize how powerful they can be, which led me to the Android Studio team. Now, after six years, I'm constantly amazed by what Android developers create – from innovative productivity apps to immersive games. It's incredibly rewarding to build tools that empower developers to create more.

In your opinion, what is the most impactful feature or improvement the Android Studio team has introduced in recent years, and why?

As part of the UI Tools team in Android Studio, I'm biased towards Compose Tools! Our team spent a lot of time rethinking how we can take a code-first approach for tools as we transition the community from XML to Compose. Features like the Compose Preview and its submodes (Interactive, Animation, Deploy preview) enable fast UI iteration, while features such as the Layout Inspector or Compose UI Check help find and diagnose UI issues with ease. We are also exploring ways to apply multimodal AI in these tools to help developers write high-quality, adaptive, and inclusive Compose code more quickly.

How does the Android Studio team ensure that products or features meet the ever-changing needs of developers?

We are constantly engaging with and listening to developer feedback to ensure we are meeting their needs! Some examples:

    • Direct feedback: UXR studies, Annual developer surveys, and Buganizer reports provide valuable insights.
    • Early access: We release Early Access Programs (EAPs) for new features, allowing developers to test them and provide feedback before official launch.
    • Community engagement: We have advisory boards with experienced Android developers, gather feedback from Google Developer Experts (GDEs), and attend conferences to connect directly with the community.

How does the Studio team contribute to Google's broader vision for the Android platform?

I think Android Studio contributes to Google's broader mission by providing Android developers with powerful and intuitive tools. This way, developers are empowered to create amazing apps that bring the best of Google's services and information to our users. Whether it's accessing knowledge through Search, leveraging Gemini, staying connected with Maps, or enjoying entertainment on YouTube, Android Studio helps developers build the experiences that connect people to what matters most.

If you could wave a magic wand and add one dream feature to the Android universe, what would it be and why?

Anyone who knows me knows that I am recently super obsessed with tennis. I would love to see more coaching wearables (e.g. Pixel Watch, Pixel Racket?!), with real-time feedback on my serve and especially forehand stroke analysis.

Learn more about Compose Tools

Inspired by Paris’ passion for empowering developers to build incredible Android apps? To learn more about how Compose Tools can streamline your app development process, check out the Compose Tools documentation and get started with the Jetpack Compose Tutorial.

Stay tuned

Keep an eye out for the next installment in our “Meet the Android Studio Team” series, where we’ll shine the spotlight on another team member and delve into their unique insights.

Find Paris Hsu on LinkedIn, X, and Medium.

Production-ready generative AI on Android with Vertex AI in Firebase

Posted by Thomas Ezan – Sr. Developer Relation Engineer (@lethargicpanda)

Gemini can help you build and launch new user features that will boost engagement and create personalized experiences for your users.

The Vertex AI in Firebase SDK lets you access Google’s Gemini Cloud models (like Gemini 1.5 Flash and Gemini 1.5 Pro) and add GenAI capabilities to your Android app. It became generally available last October, which means it's now ready for production, and it is already used by many apps on Google Play.
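
To make that concrete, here's a minimal sketch of calling a Gemini model through the SDK, assuming the com.google.firebase:firebase-vertexai dependency is already configured; the prompt and function are illustrative:

import com.google.firebase.Firebase
import com.google.firebase.vertexai.vertexAI

// Create a GenerativeModel instance backed by a Gemini Cloud model
val model = Firebase.vertexAI.generativeModel("gemini-1.5-flash")

// Generate text from a prompt (suspend function, call it from a coroutine)
suspend fun summarize(article: String): String? {
    val response = model.generateContent("Summarize this article: $article")
    return response.text
}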

Here are tips for a successful deployment to production.

Implement App Check to prevent API abuse

When using the Vertex AI in Firebase API, it is crucial to implement robust security measures to prevent unauthorized access and misuse.

Firebase App Check helps protect backend resources (like Vertex AI in Firebase, Cloud Functions for Firebase, or even your own custom backend) from abuse. It does this by attesting that incoming traffic is coming from your authentic app running on an authentic and untampered Android device.

Firebase App Check ensures that only legitimate users access your backend resources

To get started, add Firebase to your Android project and enable the Play Integrity API for your app in the Google Play console. Back in the Firebase console, go to the App Check section of your Firebase project to register your app by providing its SHA-256 fingerprint.

Then, update your Android project’s Gradle dependencies with the App Check library for Android:

dependencies {
    // BoM for the Firebase platform
    implementation(platform("com.google.firebase:firebase-bom:33.7.0"))

    // Dependency for App Check
    implementation("com.google.firebase:firebase-appcheck-playintegrity")
}

Finally, in your Kotlin code, initialize App Check before using any other Firebase SDK:

Firebase.initialize(context)
Firebase.appCheck.installAppCheckProviderFactory(
    PlayIntegrityAppCheckProviderFactory.getInstance(),
)

To enhance the security of your generative AI feature, you should implement and enforce App Check before releasing your app to production. Additionally, if your app utilizes other Firebase services like Firebase Authentication, Firestore, or Cloud Functions, App Check provides an extra layer of protection for those resources as well.

Once App Check is enforced, you’ll be able to monitor your app’s requests in the Firebase console.

App Check metrics page in the Firebase console, showing the share of verified vs. unverified requests over time

You can learn more about App Check on Android in the Firebase documentation.

Use Remote Config for server-controlled configuration

The generative AI landscape evolves quickly. Every few months, new Gemini model iterations become available and some models are removed. See the Vertex AI in Firebase Gemini models page for details.

Because of this, instead of hardcoding the model name in your app, we recommend using a server-controlled variable using Firebase Remote Config. This allows you to dynamically update the model your app uses without having to deploy a new version of your app or require your users to pick up a new version.

You define parameters that you want to control (like model name) using the Firebase console. Then, you add these parameters into your app, along with default "fallback" values for each parameter. Back in the Firebase console, you can change the value of these parameters at any time. Your app will automatically fetch the new value.

Here's how to implement Remote Config in your app:

// Initialize the remote configuration by defining the refresh time
val remoteConfig: FirebaseRemoteConfig = Firebase.remoteConfig
val configSettings = remoteConfigSettings {
    minimumFetchIntervalInSeconds = 3600
}
remoteConfig.setConfigSettingsAsync(configSettings)

// Set default values defined in your app resources
remoteConfig.setDefaultsAsync(R.xml.remote_config_defaults)

// Fetch the latest parameter values from the backend and activate them
remoteConfig.fetchAndActivate()

// Load the model name
val modelName = remoteConfig.getString("model_name")
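
You can then pass the server-controlled value straight into the SDK. A short sketch reusing the modelName fetched above:

// Instantiate the Gemini model with the server-controlled model name,
// so switching models doesn't require shipping a new app version
val generativeModel = Firebase.vertexAI.generativeModel(modelName)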

Read more about using Remote Config with Vertex AI in Firebase.

Gather user feedback to evaluate impact

As you roll out your AI-enabled feature to production, it's critical to build feedback mechanisms into your product and allow users to easily signal whether the AI output was helpful, accurate, or relevant. For example, you can incorporate interactive elements such as thumb-up and thumb-down buttons and detailed feedback forms within the user interface. The Material Icons in Compose package provides ready-to-use icons to help you implement them.

You can easily track user interactions with these elements as custom analytics events by using the Google Analytics logEvent() function:

Row {
    Button(
        onClick = {
            firebaseAnalytics.logEvent("model_response_feedback") {
                param("feedback", "thumb_up")
            }
        }
    ) {
        Icon(Icons.Default.ThumbUp, contentDescription = "Thumb up")
    }
    Button(
        onClick = {
            firebaseAnalytics.logEvent("model_response_feedback") {
                param("feedback", "thumb_down")
            }
        }
    ) {
        Icon(Icons.Default.ThumbDown, contentDescription = "Thumb down")
    }
}

Learn more about Google Analytics and its event logging capabilities in the Firebase documentation.

User privacy and responsible AI

When you use Vertex AI in Firebase for inference, you have the guarantee that the data sent to Google won’t be used by Google to train AI models (see Vertex AI documentation for details).

It's also important to be transparent with your users when they're engaging with generative AI technology. You should highlight the possibility of unexpected model behavior.

Finally, users should have control within your app over how their activity related to AI model interactions is stored and deleted.

You can learn more about how Google is approaching Generative AI responsibly in the Google Cloud documentation.

Helping users find trusted apps on Google Play

Posted by JJ Zou – Product Manager, and Scott Lin – Product Manager

At Google Play, we're committed to empowering you with the tools and resources you need to build successful and secure apps that users can rely on. That's why we're introducing a new way to recognize VPN apps that go above and beyond to protect their users: a "Verified" badge for consumer-facing VPN apps.

This new badge is designed to highlight apps that prioritize user privacy and safety, help users make more informed choices about the VPN apps they use, and build confidence in the apps they ultimately download. This badge complements existing features such as the Google Play Store banner for VPNs and Data Safety section declaration in the Play Store.

The NordVPN app page on the Google Play Store, with its rating and data safety highlights

Build user trust with more transparency

Earning the VPN badge isn't just about checking a box: it's proof that your VPN app invests in app safety. This badge signifies that your app has gone above and beyond by adhering to the Play safety and security guidelines and successfully completing a Mobile Application Security Assessment (MASA) Level 2 validation.

The VPN badge helps your app stand out in a crowded marketplace. Once awarded, the badge is prominently displayed on your app’s details page and in search results. Additionally, we have built new surfaces to showcase verified VPN applications.

Demonstrating commitment to security and safety

We're excited to share insights from some of our partners who have already earned the VPN badge and are leading the way in building a safe and trusted Google Play ecosystem. Learn how partners like Nord, hide.me, and Aloha are using the badge and implementing best practices for user security:

Nord

“We’re excited that the new ‘Verified’ badge will help users easily identify VPNs that meet high standards for security and privacy. In a market where trust is key, this badge not only provides reassurance to customers, but also highlights the integrity of developers committed to delivering secure and reliable products.”


hide.me

“Privacy and user safety are fundamental to our VPN's architecture. The MASA program has been valuable in validating our security practices and maintaining high standards. This accreditation provides independent verification of our commitment to protecting user privacy.”


Aloha Browser

“The certification process is well-organized and accessible to any company. If your product is developed with security as a core focus, passing the required audits should not pose any difficulty. We regularly conduct third-party audits and have been active participants in the MASA program since its inception. Additionally, it fosters discipline in your development practices, knowing that regular re-certification is required. Ultimately, it’s the end user who benefits the most—a secure and satisfied user is the ultimate goal for every app developer.”


Getting your App Badge-Ready

To take advantage of this opportunity to enhance your app's profile and attract more users, learn more about the specific criteria and start the validation process today.

To be considered for the "Verified" badge, your VPN app needs to:

    • Have at least 10,000 installs and 250 reviews
    • Be published on Google Play for at least 90 days
    • Declare an independent security review in its Data safety section, under ‘Additional badges’
    • Declare encryption in transit in its Data safety section

Note: This list is not exhaustive and doesn't fully represent all the criteria used to display the badge. While other factors contribute to the evaluation, fulfilling these requirements significantly increases your chances of seeing your VPN app “Verified.”

Join us in our mission to create a safer and more transparent Google Play ecosystem. We're here to support you with the tools and resources you need to build trusted apps.

Android Studio’s 10 year anniversary

Posted by Tor Norbye – Engineering Director, Jamal Eason – Director of Product Management, and Xavier Ducrohet – Tech Lead | Android Studio

Android Studio provides an integrated development environment (IDE) to develop, test, debug, and package Android apps that can reach billions of users across a diverse set of Android devices. Last month we reached a big milestone for the product: 10 years since the Android Studio 1.0 release reached the stable channel. You can hear a bit more about its history in the most recent episode of Android Developers Backstage, or watch some of the team’s favorite moments. 🎉

When we set out to develop Android Studio we started with these three principles:

First, we wanted to build and release a complete IDE, not just a plugin. Before Android Studio, users had to go download a JDK, then download Eclipse, then configure it with an update center to point to Android, install the Eclipse plugin for Android, and then configure that plugin to point to an Android SDK install. Not only did we want everything to work out-of-the-box, but we also wanted to be able to configure and improve everything: from having an integrated dependency management system to offering code inspections that were relevant to Android app developers to having a single place to report bugs.

Second, we wanted to build it on top of an actively maintained, open-sourced, and best-of-breed Java programming language IDE. Not too long before releasing Android Studio, we had all used IntelliJ and felt it was superior from a code editing perspective.

And third, we wanted to not only provide a build system that was better suited for Android app development, but also to enable this build system to work consistently both from the command line and from inside the IDE. This was important because, in the previous toolchain, we found discrepancies in behavior and capability between in-IDE builds with Eclipse and CI builds with Ant.

This led to the first release of Android Studio.

Here are some nostalgic screenshots from that first version of Android Studio:

First-run setup wizard of Android Studio, showing the variety of supported device types

Editing code within Android Studio

Editing and previewing layouts across different screen sizes

Android Studio has come a long way since those early days, but our mission of empowering Android developers with excellent tools continues to be our focus.

Let’s hear from some team members across Android, JetBrains, and Gradle as they reflect on this milestone and how far the ecosystem has come since then.

Android Studio team

“Inside the Android team, engineers who didn't work on apps had the choice between using Eclipse and using IntelliJ, and most of them chose IntelliJ. We knew that it was the gold standard for Java development (and still is, all these years later.) So we asked ourselves: if this is what developers prefer when given a choice, wouldn't this be for our users as well? 

And the warm reception when we unveiled the alpha at I/O in 2013 made it clear that it was the right choice.” 

 - Tor Norbye, Engineering Director of Android Studio at Google

“We had a vision of creating a truly Integrated Development Environment for Android app development instead of a collection of related tools. In our previous working model, we had contributions of Android tools from a range of frameworks and UX flows that did not fully work well end-to-end. The move to the open-sourced JetBrains IntelliJ platform enabled the Google team to tie tools together in a thoughtful way with Android Studio, plus it allowed others to contribute in a more seamless way. Lastly, looking back at the last 10 years, I’m proud of the partnership with JetBrains and Gradle, plus the community of contributors, who together brought the best suite of tools to Android app developers.” 

 – Jamal Eason, Director of Product Management of Android Studio at Google

JetBrains

“Google choosing IntelliJ as the platform to build Android Studio was a very exciting moment for us at JetBrains. It allowed us to strengthen and build on the platform even further, and paved the way for further collaboration in other projects such as Kotlin.” 

 – Hadi Hariri, VP of Program Management at JetBrains

Gradle

“Android Studio's 10th anniversary marks a decade of incredible progress for Android developers. We are proud that Gradle Build Tool has continued to be a foundational part of the Android toolchain, enabling millions of Android developers to build their apps faster, more elegantly, and at scale.”

 – Hans Dockter, creator of Gradle Build Tool and CEO/Founder of Gradle Inc.

“Our long-standing strategic partnership with Google and our mutual commitment to improving the developer experience continues to impact millions of developers. We look forward to continuing that journey for many years to come.” 

 – Piotr Jagielski, VP of Engineering, Gradle Build Tool


Last but not least, we want to thank you for your feedback and support over the last decade. Android Studio wouldn’t be where it is today without the active community of developers who are using it to build Android apps for their communities and the world and providing input on how we can make it better each day.

As we head into this new year, we’ll be bringing Gemini into more aspects of Android Studio to help you across the development lifecycle to build quality apps faster. We’ll strive to make it easier and more seamless to build, test, and deploy your apps with Jetpack Compose across the range of form factors. We are proud of what we launch, but we always have room to improve in the evolving mobile ecosystem. Therefore, quality and stability of the IDE is our top priority so that you can be as productive as possible.

We look forward to continuing to empower you with great tools and improvements as we take Android Studio forward into the next decade. 🚀 We also welcome you to be a part of our developer community on LinkedIn, Medium, YouTube, or X.

The First Beta of Android 16

Posted by Matthew McCullough – VP of Product Management, Android Developer

The first beta of Android 16 is now available, which means it's time to open the experience up to both developers and early adopters. You can now enroll any supported Pixel device here to get this and future Android Beta updates over-the-air.

This build includes support for the future of app adaptivity, Live Updates, the Advanced Professional Video format, and more. We’re looking forward to hearing what you think, and thank you in advance for your continued help in making Android a platform that works for everyone.

Android adaptive apps

Users expect apps to work seamlessly on all their devices, regardless of display size and form factor. To that end, Android 16 is phasing out the ability for apps to restrict screen orientation and resizability on large screens. This is similar to features OEMs have added over the last several years to large screen devices to allow users to run apps at any window size and aspect ratio.

On screens larger than 600dp wide, apps that target API level 36 will have app windows that resize; you should check your apps to ensure your existing UIs scale seamlessly, working well across portrait and landscape aspect ratios. We're providing frameworks, tooling, and libraries to help.

An adaptive app showing more content in the unfolded state of a foldable phone

Live Updates

Live Updates are a new class of notifications that help users monitor and quickly access important ongoing activities.

The new ProgressStyle notification template provides a consistent user experience for Live Updates, helping you build for these progress-centric user journeys: rideshare, delivery, and navigation. It includes support for custom icons for the start, end, and current progress tracking, segments and points, user journey states, milestones, and more.

ProgressStyle notifications are suggested only for ride sharing, food delivery, and navigation use cases.

@Override
protected Notification getNotification() {
   return new Notification.Builder(mContext, CHANNEL_ID)
      .setSmallIcon(R.drawable.ic_app_icon)
      .setContentTitle("Ride requested")
      .setContentText("Looking for nearby drivers")
      .setStyle(
          new Notification.ProgressStyle()
          .addProgressSegment(
              new Notification.ProgressStyle.Segment(100)
                  .setColor(COLOR_ORANGE)
           ).setProgressIndeterminate(true)
      ).build();
}

Camera and media updates

Android 16 advances support for the playback, creation, and editing of high-quality media, a critical use case for social and productivity apps.

Advanced Professional Video

Android 16 introduces support for the Advanced Professional Video (APV) codec, which is designed for professional-level, high-quality video recording and post-production.

The APV codec standard has the following features:

    • Perceptually lossless video quality (close to raw video quality)
    • Low complexity and high throughput intra-frame-only coding (without pixel domain prediction) to better support editing workflows
    • Support for high bit-rate range up to a few Gbps for 2K, 4K and 8K resolution content, enabled by a lightweight entropy coding scheme
    • Frame tiling for immersive content and for enabling parallel encoding and decoding
    • Support for various chroma sampling formats and bit-depths
    • Support for multiple decoding and re-encoding without severe visual quality degradation
    • Support for multi-view video and auxiliary video such as depth, alpha, and preview
    • Support for HDR10/10+ and user-defined metadata

A reference implementation of APV is provided through the OpenAPV project. Android 16 will implement support for the APV 422-10 Profile that provides YUV 422 color sampling along with 10-bit encoding and for target bitrates of up to 2Gbps.

Camera night mode scene detection

To help your app know when to switch to and from a night mode camera session, Android 16 adds EXTENSION_NIGHT_MODE_INDICATOR. If supported, it's available in the CaptureResult within Camera2.

This is the API we briefly mentioned as coming soon in the "How Instagram enabled users to take stunning low light photos" blogpost. That post is a practical guide on how to implement night mode together with a case study that links higher-quality, in-app, night mode photos with an increase in the number of photos shared from the in-app camera.
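
As a rough sketch, reading the new key from a Camera2 capture callback might look like the following; the key is confirmed to live in CaptureResult, but the handling logic here is hypothetical:

// Inside a CameraCaptureSession.CaptureCallback
override fun onCaptureCompleted(
    session: CameraCaptureSession,
    request: CaptureRequest,
    result: TotalCaptureResult
) {
    // Read the night mode indicator from the capture result (Android 16+)
    val indicator = result.get(CaptureResult.EXTENSION_NIGHT_MODE_INDICATOR)
    // e.g. prompt the user to switch to a night mode session when indicated
}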

Vertical Text

Android 16 adds low-level support for rendering and measuring text vertically to provide foundational vertical writing support for library developers. This is particularly useful for languages like Japanese that commonly use vertical writing systems. A new flag, VERTICAL_TEXT_FLAG, has been added to the Paint class. When this flag is set using Paint.setFlags, Paint’s text measurement APIs will report vertical advances instead of horizontal advances, and Canvas will draw text vertically.

Note: Current high-level text APIs, such as Text in Jetpack Compose, TextView, Layout classes and their subclasses do not support vertical writing systems, and do not support using the VERTICAL_TEXT_FLAG.

val text = "「春は、曙。」"
Box(Modifier
  .padding(innerPadding)
  .background(Color.White)
  .fillMaxSize()
  .drawWithContent {
     drawIntoCanvas { canvas ->
       val paint = Paint().apply {
         textSize = 64.sp.toPx()
       }
       // Draw text vertically
       paint.flags = paint.flags or VERTICAL_TEXT_FLAG
       val height = paint.measureText(text)
       canvas.nativeCanvas.drawText(
         text, 0, text.length, size.width / 2, (size.height - height) / 2, paint
       )
     }
  }) 
{}

Accessibility

Android 16 adds new accessibility APIs to help you bring your app to every user.

Supplemental descriptions

When an accessibility service describes a ViewGroup, it combines content labels from its child views. If you provide a contentDescription for the ViewGroup, accessibility services assume you are also overriding the content of non-focusable child views. This can be problematic if you want to label things like a drop down (e.g. "Font Family") while preserving the current selection for accessibility (e.g. "Roboto"). Android 16 adds setSupplementalDescription so you can provide text that provides information about a ViewGroup without overriding information from its children.
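
As a rough sketch, assuming the new method is exposed on AccessibilityNodeInfo; the custom view and label here are hypothetical:

import android.content.Context
import android.view.accessibility.AccessibilityNodeInfo
import android.widget.FrameLayout

class FontFamilyPicker(context: Context) : FrameLayout(context) {

    override fun onInitializeAccessibilityNodeInfo(info: AccessibilityNodeInfo) {
        super.onInitializeAccessibilityNodeInfo(info)
        // Labels the drop down as a whole ("Font Family") without overriding
        // the child view that announces the current selection (e.g. "Roboto")
        info.setSupplementalDescription("Font Family")
    }
}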

Required form fields

Android 16 adds setFieldRequired to AccessibilityNodeInfo so apps can tell an accessibility service that input to a form field is required. This is an important scenario for users filling out many types of forms, even things as simple as a required terms and conditions checkbox, helping users to consistently identify and quickly navigate between required fields.
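
A similar sketch for a required terms and conditions checkbox, again assuming setFieldRequired() is exposed on AccessibilityNodeInfo; the delegate wiring is standard, and the view reference is hypothetical:

termsCheckbox.accessibilityDelegate = object : View.AccessibilityDelegate() {
    override fun onInitializeAccessibilityNodeInfo(
        host: View,
        info: AccessibilityNodeInfo
    ) {
        super.onInitializeAccessibilityNodeInfo(host, info)
        // Tells accessibility services this form field must be filled in
        info.setFieldRequired(true)
    }
}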

Generic ranging APIs

Android 16 includes the new RangingManager, which provides ways to determine the distance and angle on supported hardware between the local device and a remote device. RangingManager supports the usage of a variety of ranging technologies such as BLE channel sounding, BLE RSSI-based ranging, Ultra-Wideband, and WiFi round trip time.

Behavior changes

With every Android release, we seek to make the platform more efficient and robust, balancing the needs of your apps against things like system performance and battery life. This can result in behavior changes that impact compatibility.

ART internal changes

Code that leverages internal structures of the Android Runtime (ART) may not work correctly on devices running Android 16 along with earlier Android versions that update the ART module through Google Play system updates. These structures are changing in ways that help improve the Android Runtime's (ART's) performance.

Impacted apps will need to be updated. Relying on internal structures can always lead to compatibility problems, but it's particularly important to avoid relying on code (or libraries containing code) that leverages internal ART structures, since ART changes aren't tied to the platform version the device is running on; they go out to over a billion devices through Google Play system updates.

For more information, see the Android 16 changes affecting all apps and the restrictions on non-SDK interfaces.

Migration or opt-out required for predictive back

For apps targeting Android 16 or higher and running on an Android 16 or higher device, the predictive back system animations (back-to-home, cross-task, and cross-activity) are enabled by default. Additionally, the deprecated onBackPressed is not called and KeyEvent.KEYCODE_BACK is no longer dispatched.

If your app intercepts the back event and you haven't migrated to predictive back yet, update your app to use supported back navigation APIs or temporarily opt out by setting the android:enableOnBackInvokedCallback attribute to false in the <application> or <activity> tag of your app’s AndroidManifest.xml file.
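
If you migrate rather than opt out, a minimal sketch using the AndroidX OnBackPressedDispatcher from a ComponentActivity; the drawer handling is hypothetical:

import androidx.activity.addCallback

// Replaces onBackPressed() overrides and KEYCODE_BACK handling
onBackPressedDispatcher.addCallback(this) {
    // Handle back in-app, e.g. close a drawer instead of leaving the screen
    drawerLayout.closeDrawers() // hypothetical view reference
}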

Predictive back support for 3-button navigation

Android 16 brings predictive back support to 3-button navigation for apps that have properly migrated to predictive back. Long-pressing the back button initiates a predictive back animation, giving users a preview of where the back button takes them.

This behavior applies across all areas of the system that support predictive back animations, including the system animations (back-to-home, cross-task, and cross-activity).

Fixed rate work scheduling optimization

Prior to targeting Android 16, when scheduleAtFixedRate missed a task execution due to the app being outside a valid process lifecycle, all missed executions would immediately execute when the app returned to a valid lifecycle.

When targeting Android 16, at most one missed execution of scheduleAtFixedRate will be immediately executed when the app returns to a valid lifecycle. This behavior change is expected to improve app performance. Please test the behavior to ensure your application is not impacted. You can also test by using the app compatibility framework and enabling the STPE_SKIP_MULTIPLE_MISSED_PERIODIC_TASKS compat flag.
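
For reference, a minimal sketch of the affected pattern; the periodic task is hypothetical:

import java.util.concurrent.ScheduledThreadPoolExecutor
import java.util.concurrent.TimeUnit

val executor = ScheduledThreadPoolExecutor(1)

// Runs every 15 minutes at a fixed rate. If several periods elapse while the
// process is outside a valid lifecycle, an app targeting Android 16 sees at
// most one missed execution run immediately on return, instead of all of them.
executor.scheduleAtFixedRate(
    { syncPendingData() }, // hypothetical periodic task
    /* initialDelay = */ 0L,
    /* period = */ 15L,
    TimeUnit.MINUTES
)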

Ordered broadcast priority scope no longer global

In Android 16, broadcast delivery order using the android:priority attribute or IntentFilter#setPriority() across different processes will not be guaranteed. Broadcast priorities for ordered broadcasts will only be respected within the same application process rather than across all system processes.

Additionally, broadcast priorities will be automatically confined to the range (SYSTEM_LOW_PRIORITY + 1, SYSTEM_HIGH_PRIORITY - 1).

Your application may be impacted if it does either of the following:

      1. Your application has declared multiple processes that have set broadcast receiver priorities for the same intent.

      2. Your application process interacts with other processes and has expectations around receiving a broadcast intent in a certain order.

If the processes need to coordinate with each other, they should communicate using other coordination channels.
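
For reference, a minimal sketch of the pattern whose cross-process ordering is no longer guaranteed; the action name and receiver are hypothetical:

// Register an ordered-broadcast receiver with a priority. On Android 16,
// the priority is only honored relative to receivers in the same process.
val filter = IntentFilter("com.example.ACTION_DATA_REFRESH").apply {
    priority = 10
}
context.registerReceiver(refreshReceiver, filter, Context.RECEIVER_NOT_EXPORTED)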

Gemini Extensions

Samsung just launched new Gemini Extensions on the S25 series, demonstrating new ways Android apps can integrate with the power of Gemini. We're working to make this functionality available on even more form factors.

Two Android API releases in 2025

This preview is for the next major release of Android with a planned launch in Q2 of 2025 and we plan to have another release with new developer APIs in Q4. The Q2 major release will be the only release in 2025 to include planned behavior changes that could affect apps. The Q4 minor release will pick up feature updates, optimizations, and bug fixes; it will not include any app-impacting behavior changes.

The 2025 SDK release timeline: feature-only updates in Q1 and Q3, a major SDK release (behavior changes, APIs, and features) in Q2, and a minor SDK release (APIs and features) in Q4

We'll continue to have quarterly Android releases. The Q1 and Q3 updates, which will land in-between the Q2 and Q4 API releases, will provide incremental updates to ensure continuous quality. We’re putting additional energy into working with our device partners to bring the Q2 release to as many devices as possible.

There’s no change to the target API level requirements and the associated dates for apps in Google Play; our plans are for one annual requirement each year, tied to the major API level.

How to get ready

In addition to performing compatibility testing on this next major release, make sure that you're compiling your apps against the new SDK, and use the compatibility framework to enable targetSdkVersion-gated behavior changes as they become available for early testing.

App compatibility

The Android 16 release timeline, highlighting the Beta Releases and Platform Stability stages leading up to the final release

The Android 16 Preview program runs from November 2024 until the final public release in Q2 of 2025. At key development milestones, we'll deliver updates for your development and testing environments. Each update includes SDK tools, system images, emulators, API reference, and API diffs. We'll highlight critical APIs as they are ready to test in the preview program in blogs and on the Android 16 developer website.

We’re targeting March of 2025 for our Platform Stability milestone. At this milestone, we’ll deliver final SDK/NDK APIs and also final internal APIs and app-facing system behaviors. From that time you’ll have several months before the final release to complete your testing. The release timeline details are here.

Get started with Android 16

Now that we've entered the beta phase, you can enroll any supported Pixel device to get this and future Android Beta updates over-the-air. If you don’t have a Pixel device, you can use the 64-bit system images with the Android Emulator in Android Studio.

If you are currently on Android 16 Developer Preview 2 or are already in the Android Beta program, you will be offered an over-the-air update to Beta 1.

If you are in Android 25Q1 Beta and would like to take the final stable release of 25Q1 and exit Beta, you need to ignore the over-the-air update to 25Q2 Beta 1 and wait for the release of 25Q1.

We're looking for your feedback so please report issues and submit feature requests on the feedback page. The earlier we get your feedback, the more we can include in our work on the final release.

For the best development experience with Android 16, we recommend that you use the latest preview of Android Studio (Meerkat). Once you’re set up, here are some of the things you should do:

    • Compile against the new SDK, test in CI environments, and report any issues in our tracker on the feedback page.
    • Test your current app for compatibility, learn whether your app is affected by changes in Android 16, and install your app onto a device or emulator running Android 16 and extensively test it.

We’ll update the preview/beta system images and SDK regularly throughout the Android 16 release cycle. Once you’ve installed a beta build, you’ll automatically get future updates over-the-air for all later previews and Betas.

For complete information, visit the Android 16 developer site.

The future is adaptive: Changes to orientation and resizability APIs in Android 16

Posted by Maru Ahues Bouza – Director, Product Management

With 3+ billion Android devices in use globally, the Android ecosystem is more vibrant than ever. Android mobile apps run on a diverse range of devices, from phones and foldables to tablets, Chromebooks, cars, and most recently XR. Users buy into an entire device ecosystem and expect their apps to work across all devices. To thrive in this multi-device environment, your apps need to adapt seamlessly to different screen sizes and form factors.

Many Android apps rely on user interface approaches that work in a single orientation and/or restrict resizability. However, users want apps to make full use of their large screens, so Android device manufacturers added well-received features that override these app restrictions.

With this in mind, Android 16 is removing the ability for apps to restrict orientation and resizability at the platform level, and shifting to a consistent model of adaptive apps that seamlessly adjust to different screen sizes and orientations. This change will reduce fragmentation with behavior that better meets user expectations, and it improves accessibility by respecting the user’s preferred orientation. We're building tools, libraries, and platform APIs to help you do this and to provide a consistently excellent user experience across the entire Android ecosystem.

What's changing?

Starting with Android 16, we're phasing out manifest attributes and runtime APIs used to restrict an app's orientation and resizability, enabling better user experiences for many apps across devices.

These changes will initially apply when the app is running on a large screen, where “large screen” means that the smaller dimension of the display is greater than or equal to 600dp. This includes:

    • Inner displays of large screen foldables
    • Tablets, including desktop windowing
    • Desktop environments, including Chromebooks

The following manifest attributes and APIs will be ignored for apps targeting Android 16 (SDK 36) on large screens:

Manifest attribute / API      Ignored values
screenOrientation             portrait, reversePortrait, sensorPortrait, userPortrait, landscape, reverseLandscape, sensorLandscape, userLandscape
setRequestedOrientation()     portrait, reversePortrait, sensorPortrait, userPortrait, landscape, reverseLandscape, sensorLandscape, userLandscape
resizeableActivity            all
minAspectRatio                all
maxAspectRatio                all

There are some exceptions to these changes for controlling orientation, aspect ratio, and resizability:

    • As mentioned before, these changes won't apply to screens that are smaller than sw600dp (e.g. most phones, flippables, and outer displays on large screen foldables)

Users also remain in control: they can explicitly opt in to the app’s default behavior in the aspect ratio settings.

Apps targeting API level 36 that were previously letterboxed on large screen devices will fill the display in landscape orientation on Android 16

Get ready for this change by making your app adaptive

Apps will need to support landscape and portrait layouts for window sizes across the full range of aspect ratios that users can choose to run apps in, as there will no longer be a way to restrict the aspect ratio and orientation to portrait or to landscape.

To test if your app will be impacted by these changes, use the Android 16 Beta 1 developer preview with the Pixel Tablet and Pixel Fold series emulators in Android Studio, and either set targetSdkPreview = “Baklava” or use the app compatibility framework by enabling the UNIVERSAL_RESIZABLE_BY_DEFAULT flag.
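
For example, a minimal build.gradle.kts sketch for compiling against and targeting the preview SDK, assuming a recent Android Gradle Plugin that supports the preview properties:

android {
    compileSdkPreview = "Baklava"

    defaultConfig {
        targetSdkPreview = "Baklava"
    }
}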

For existing apps that restrict orientation and aspect ratio, these changes may result in problems like overlapping layouts. To solve these issues and meet user expectations, our vision is that apps are built to be adaptive, to provide an optimal experience whether someone is using the app on a phone, foldable, tablet, Chromebook, XR or in a car.

Resolving common problems

    • Avoid stretched UI components: If layouts were designed and built with the assumption of phone screens, then app functionality may break for other aspect ratios. For example, if a layout was built assuming a portrait aspect ratio, then UI elements that fill the max width of the window will appear stretched in landscape-oriented windows. If layouts aren’t built to scroll, then users may not be able to click on buttons or other UI elements that are offscreen, resulting in confusing or broken behavior. Add a maximum width to components to avoid stretching, and add scrolling to ensure all content is reachable.
    • Preserve state when the window size changes: Removing orientation and aspect ratio restrictions also means that the window sizes of apps will change more frequently in response to how the user prefers to use an app, such as by rotating, folding, or resizing it in multi-window or free-form windowing modes. Orientation changes and resizing will result in Activity recreation by default. To ensure a good user experience, it is critical that app state is preserved through these configuration changes so that users don’t lose their place in the app when changing posture or changing windowing modes.

To account for different window sizes and aspect ratios, use window size classes to drive layout behavior in a way that doesn’t require device-specific customizations. Apps should also be built with the assumption that window sizes will frequently change. It’s not necessary to build duplicate orientation-specific layouts - instead, ensure your existing UIs can re-layout well no matter what the window size is. If you have a landscape- or portrait-specific layout, those layouts will still be used.
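
A minimal sketch of this approach with the Material 3 window size class APIs (androidx.compose.material3.windowsizeclass); the two layout composables are hypothetical:

import androidx.compose.material3.windowsizeclass.WindowSizeClass
import androidx.compose.material3.windowsizeclass.WindowWidthSizeClass
import androidx.compose.runtime.Composable

@Composable
fun HomeScreen(windowSizeClass: WindowSizeClass) {
    when (windowSizeClass.widthSizeClass) {
        // Expanded width: tablets, unfolded foldables, desktop windows
        WindowWidthSizeClass.Expanded -> ListDetailLayout()
        // Compact and medium widths: phones, small windows
        else -> ListOnlyLayout()
    }
}

Obtain the size class once per window, for example with calculateWindowSizeClass(activity), and pass it down so layouts react to the window rather than the device.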

Optimizing for window sizes by building adaptive

If you're already building adaptive layouts and supporting all orientations, you're set up for success: your app will be prepared for each of the device types and windowing modes your users want to use your app in, and these changes should have minimal impact.

We've also got a range of testing resources to help you guarantee reliability. You can automate testing with tools like the Espresso testing framework and Jetpack Compose testing APIs.

FlipaClip is a great example of why building for multiple form-factors matters: they saw 54% growth in tablet users in the four months after they optimized their app to be adaptive.

Timeline

We understand that the changes are significant for apps that have traditionally only supported portrait orientation. UI issues like buttons going off screen, overlapping content, or screens with camera viewfinders may need adjustments.

To help you plan ahead and make the necessary adjustments, here’s the planned timeline outlining when these changes will take effect:

    • Android 16 (2025): The changes described above will be the baseline experience for large screen devices (smallest screen width >600dp) for apps that target API level 36, with the option for developers to opt out.
    • Android release in 2026: The changes described above will be the baseline experience for large screen devices (smallest screen width >600dp) for apps that target API level 37. Developers will not have an option to opt out.

Target API level    Applicable devices                                     Developer opt-out allowed
36 (Android 16)     Large screen devices (smallest screen width >600dp)   Yes
37 (anticipated)    Large screen devices (smallest screen width >600dp)   No

The deadlines for targeting a specific API level are app store specific. For Google Play, the plan is that targeting API 36 will be required in August 2026 and targeting API 37 will be required in August 2027.

Preparing for Android 16

Refer to the Android 16 changes page for all changes impacting apps in Android 16, as well as additional resources for updating your apps if you are impacted. To test your app, download the Android 16 Beta 1 developer preview and update to targetSdkPreview = “Baklava” or use the app compatibility framework to enable specific changes.

We're committed to helping developers embrace this new era of adaptive apps and unlock the full potential of their apps across the diverse Android ecosystem. Check out the do’s and don’ts for designing and building across multiple window sizes and form factors, as well as how to test across the variety of devices that your app will be used on.

Stay tuned for more updates and resources as we approach the release of Android 16!

Build kids app experiences for Wear OS

Posted by John Zoeller – Developer Relations Engineer, and Caroline Vander Wilt – Group Product Manager

New Wear OS features enable ‘standalone’ watches for kids, unlocking new possibilities for Wear OS app developers

In collaboration with Samsung, Wear OS is introducing Galaxy Watch for Kids, a new kids experience enabling kids to explore while staying connected with their families from their smartwatch, no phone necessary. This launch unlocks new opportunities for Wear OS developers to reach younger audiences.

Galaxy Watch for Kids is rolling out to Galaxy Watch7 LTE models, with features including:

    • No phone ownership required: This experience enables the watch and its associated apps to operate on a fully standalone basis using LTE and, when available, Wi-Fi connectivity. This includes calling, texting, games, and more.
    • Selection of kid-friendly apps: From gaming to health, kids can browse and request installs of Teacher Approved apps and watch faces on Google Play. In addition to approving and blocking apps, parents can also monitor app usage from Google Family Link.
    • Stay in touch with parent-managed contacts: Parents can ensure safer communications by limiting text and calling to approved contacts.
    • Location sharing: Offers peace of mind with location sharing and geofencing notifications when kids leave or arrive at designated areas.
    • School time: Limits watch functionality during scheduled hours of the day, so kids can focus while in school or studying.

Building kids experiences with standalone functionality enables you to reach both standalone and tethered watches for kids. Apps like Math Tango have already created great Wear OS experiences for kids. Check out the video below to learn how they built a rich and engaging Wear OS app.

Our new kids-focused design and content principles and developer guidance are also available today. Check out some of the highlights in the next section.

New principles and guidelines for development

We've created new design principles and guidelines to help developers take advantage of this opportunity to build and improve apps and watch faces for kids.

Design principle: Active and fun

Build engaging healthy experiences for children by including activity-based features.

A great example of this is the Odd Squad Time Unit app from PBS KIDS that encourages children to get up and be physically active. By using the on-device sensors and power-efficient platform APIs, the app is able to provide a fun experience all day and still maintain battery life of the watch from wakeup to bed time.

A jump countdown screen from the Odd Squad Time Unit app, prompting kids to get active

Note that while experiences should be catered to kids, they must also follow the Wear OS quality requirements related to the visual experience of your app, especially when crafting touch targets and font sizes.

Content principle: Thoughtfully crafted

Consider adjusting your content to make it not only appropriate, but also consumable and intuitive for younger kids (including those as young as 6). This includes both audio and visual app components.

Tinkercast’s Two Whats?! And a Wow! app uses age-appropriate vocabulary and fun characters to aid in their teaching. It’s a great example of how a developer should account for reading comprehension.

The Two Whats?! And a Wow! app using a friendly character and simple vocabulary to prompt kids to swipe through options

Development guidelines

New Wear OS kids apps must adhere to the Wear OS app quality guidelines, the guidelines for standalone apps, and the new Kids development guide.


Minimize impact on device battery

Minimize events that affect battery life over the course of one session. Kids use watches that provide important safety features for their parents or guardians, which depend on the device having enough battery life. Below are best practices for reducing battery impact.

      ✅ DO design for offline use cases so that kids can play without incurring network-related battery costs (see the connectivity sketch after this list)

      ✅ DO minimize tasks that require an internet or GPS connection

      🚫 DO NOT use direct sensor tracking, as this will significantly reduce the battery life

      🚫 DO NOT include long-running animations
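
To make the offline-first suggestion concrete, here is a minimal Kotlin sketch (the helper name and the fallback calls are hypothetical) that only reaches for the network when a validated connection already exists, so gameplay can default to bundled content:

```kotlin
import android.content.Context
import android.net.ConnectivityManager
import android.net.NetworkCapabilities

// Hypothetical helper: returns true only when a validated connection
// is already up, so the app never forces the radio awake just to check.
fun hasValidatedConnection(context: Context): Boolean {
    val cm = context.getSystemService(ConnectivityManager::class.java)
    val caps = cm.getNetworkCapabilities(cm.activeNetwork) ?: return false
    return caps.hasCapability(NetworkCapabilities.NET_CAPABILITY_VALIDATED)
}

// Usage (hypothetical functions): play from bundled content by default,
// and only refresh the catalog when connectivity is already available.
// if (hasValidatedConnection(context)) refreshCatalog() else loadBundledLevels()
```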


Choose a development environment

To develop kid-friendly apps and games, you can use Compose for Wear OS, our recommended approach for building UI for Wear OS, as well as Unity for Android.

We recommend Unity for developing games on Wear OS if you’re familiar and comfortable with its workflows and capabilities. However, for games with only a few animations, Compose Animation should be sufficient and is better supported within the Android environment.
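
Where Compose Animation suffices, a short, finite animation that settles quickly is also the battery-friendly option. Here is a minimal sketch (the tappable star widget is invented for illustration; it assumes the standard Wear Compose and Compose animation dependencies):

```kotlin
import androidx.compose.animation.core.animateFloatAsState
import androidx.compose.animation.core.tween
import androidx.compose.foundation.clickable
import androidx.compose.runtime.Composable
import androidx.compose.runtime.getValue
import androidx.compose.runtime.mutableStateOf
import androidx.compose.runtime.remember
import androidx.compose.runtime.setValue
import androidx.compose.ui.Modifier
import androidx.compose.ui.draw.scale
import androidx.wear.compose.material.Text

// Hypothetical widget: a star that briefly grows when tapped.
// animateFloatAsState runs a short tween and then settles, so the
// composition isn't kept busy with a long-running animation.
@Composable
fun TappableStar() {
    var tapped by remember { mutableStateOf(false) }
    val scale by animateFloatAsState(
        targetValue = if (tapped) 1.5f else 1f,
        animationSpec = tween(durationMillis = 300),
        label = "starScale"
    )
    Text(
        text = "⭐",
        modifier = Modifier
            .scale(scale)
            .clickable { tapped = !tapped }
    )
}
```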

Be sure to consider that some Wear OS quality requirements may require custom Unity implementations, such as support for Rotary Input.

Originator’s MathTango showcases the flexibility and richness of developing with Unity:

A purple cartoon moose-like character with large antlers is displayed on a round smartwatch face. The name 'ISAAC' is shown below the character, along with the label 'NEW!'. A green arrow is visible in the top left corner of the screen.

Creating watch faces

Developing watch faces for kids requires the use of Watch Face Format. Watch faces should adhere to our content and design principles mentioned above, as well as our quality standards, including our ambient mode requirement.

The following examples demonstrate our content principle: Appealing. The content is relevant, engaging, and fun for kids, sparking their interest and imagination.

The Crayola Pets Watch Face comes with a great variety of customization options, and demonstrates an informative and pleasant watch face:

A circular watch face shows a cartoon character, the time (3:30), the date (Feb 10), and a battery indicator (89%).

The Marvel Watch Faces (Captain America shown) provide a fun and useful step tracking feature:

A round smartwatch face displays a cartoon Captain America, his shield, and the time (12:30). A step counter shows 650 steps. The Marvel logo is visible.

Kids experience publishing requirements

Developers looking to get started on a new kids experience will need to keep a few things in mind when publishing on the Play Store.

Expand your reach with Wear OS

Get ready to reach a new generation of Wear OS users! We've created all-new guidelines to help you build engaging experiences for kids. Here's a quick recap:

    • Design and content principles: build active, fun experiences with thoughtfully crafted, age-appropriate content
    • Development guidelines: minimize battery impact, build with Compose for Wear OS or Unity, and use Watch Face Format for watch faces
    • Publishing requirements: keep the Play Store kids experience requirements in mind when you publish

With the Galaxy Watch for Kids experience, developers can reach an entirely new audience of users and be part of the next generation of learning and enrichment on Wear OS.

Check out all of the new experiences on the Play Store!

Apps adopt Transformer to support more reliable and performant media editing use cases

Posted by Caren Chang – Developer Relations Engineer

The Jetpack Media3 library enables Android developers to build high-quality media apps. As part of the Media3 library, the Transformer module aims to provide easy-to-use, reliable, and performant APIs for transcoding and editing media.

For example, apps can use Transformer to apply editing operations such as trimming a long media file or applying effects to video tracks. Transformer can also be used to convert media files from one format to another, such as adjusting the resolution or encoding of the media file.
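
For a sense of what this looks like in code, here is a minimal sketch of a trim-and-transcode export with Transformer (assuming a recent Media3 release; inputUriString and outputPath are placeholders, and exact builder methods can vary between versions):

```kotlin
import android.content.Context
import androidx.media3.common.MediaItem
import androidx.media3.common.MimeTypes
import androidx.media3.transformer.Composition
import androidx.media3.transformer.ExportException
import androidx.media3.transformer.ExportResult
import androidx.media3.transformer.Transformer

// Trim the first 10 seconds of a video and transcode it to H.265.
// Transformer must be used from a thread with a Looper (e.g. the main thread).
fun trimAndTranscode(context: Context, inputUriString: String, outputPath: String) {
    val mediaItem = MediaItem.Builder()
        .setUri(inputUriString)
        .setClippingConfiguration(
            MediaItem.ClippingConfiguration.Builder()
                .setStartPositionMs(0)
                .setEndPositionMs(10_000)
                .build()
        )
        .build()

    val transformer = Transformer.Builder(context)
        .setVideoMimeType(MimeTypes.VIDEO_H265)
        .addListener(object : Transformer.Listener {
            override fun onCompleted(composition: Composition, exportResult: ExportResult) {
                // Export succeeded; outputPath now contains the edited file.
            }

            override fun onError(
                composition: Composition,
                exportResult: ExportResult,
                exportException: ExportException
            ) {
                // Surface the failure to the user or retry with other settings.
            }
        })
        .build()

    transformer.start(mediaItem, outputPath)
}
```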

Developing Transformer APIs

As part of the process to introduce new APIs, our engineering team works closely with Google apps such as Google Photos to test and experiment with the new APIs. Experimental flags are first introduced to enable performance improvements. Once the results are successful and conclusive, these experimental features are then built into the default API implementations or promoted to public APIs for all apps to use. This approach allows Transformer APIs to be tested on a wide variety of devices.

Transformer Adoption in apps

Apps that have been using Transformer in production have observed in-app performance improvements, less code to maintain, and a better developer experience. Let's take a closer look at how Transformer has helped apps with their media-editing use cases.

One of users’ favorite features in Google Photos is memory sharing, where snippets of your life story that are curated and presented as Google Photos memories can now be shared as videos to social media and chat apps. However, the process of combining media items to create a video on device is resource intensive and subject to significant latency, especially on low-end devices. To reduce this latency and enable the feature on a wider range of devices, Photos adopted Transformer in their media creation pipeline. Along with other improvements made, the team found that Transformer played a part in reducing the median user latency for creating memory videos by 41% on high-end devices and 27% on mid-range devices.

The Photos app also enables users to perform media edits such as trimming or rotating a video. By adopting Transformer APIs for rotating videos, median save latency was reduced by 79% for applicable videos. The app also adopted Transformer’s API for optimizing video trimming, and observed video save latency decrease by 64%.
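
This is not Photos' actual pipeline, but as a rough sketch of the same idea, a rotation edit with Transformer's built-in video effects might look like this (the URI and output path are placeholders):

```kotlin
import android.content.Context
import androidx.media3.common.MediaItem
import androidx.media3.effect.ScaleAndRotateTransformation
import androidx.media3.transformer.EditedMediaItem
import androidx.media3.transformer.Effects
import androidx.media3.transformer.Transformer

// Rotate a video by 90 degrees using Media3's built-in effect, instead of
// hand-rolling a MediaCodec pipeline.
fun rotateVideo(context: Context, inputUriString: String, outputPath: String) {
    val rotate = ScaleAndRotateTransformation.Builder()
        .setRotationDegrees(90f)
        .build()

    val edited = EditedMediaItem.Builder(MediaItem.fromUri(inputUriString))
        .setEffects(Effects(/* audioProcessors = */ emptyList(), listOf(rotate)))
        .build()

    Transformer.Builder(context)
        .build()
        .start(edited, outputPath)
}
```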

1 Second Everyday is a personal video journal that helps you create captivating montages and timelapses. One of the app's main user journeys is sequentially combining short videos to create a meaningful movie. After adopting Transformer for this use case, the app observed that video encoding performance was up to 5x faster, allowing them to explore enabling 4K and HDR support. The Transformer adoption also helped decrease relevant code by 30%, making it easier for the developers to maintain the code base.

BandLab is the next-generation music creation platform used by millions around the world to make and share their music. The app originally used the low-level MediaCodec APIs for its video creation use cases, but found that this implementation resulted in native crashes that were difficult to debug. After researching Transformer further, the team decided to migrate from MediaCodec to Transformer. The migration took the team only 12 working days, and it resulted in a simpler codebase and a more maintainable pipeline for their media creation use cases. In addition, all previously observed native crashes stopped occurring.

What's next for Transformer?

We're excited to see Transformer's adoption in the developer community, and will continue adding new features to support more media-editing use cases across the Android ecosystem, including:

    • Better support for previewing media edits
    • Improving the performance and developer experience for video frame extraction
    • Easier integration with AI effects
    • and much more

Keep an eye on what we're working on in the Media3 GitHub, and file feature requests to help shape the future of Transformer!

Android Studio Ladybug Feature Drop is Stable!

Posted by Steven Jenkins – Product Manager, Android Studio

Today, we are thrilled to announce the stable release of Android Studio Ladybug 🐞 Feature Drop (2024.2.2)!

Accelerate your productivity with Gemini in Android Studio, Animation Preview support for Wear OS Tiles, App Links Assistant, and much more. All of these new features are designed to help you build high-quality Android apps faster.

Read on to learn more about all the updates, quality improvements, and new features across your key workflows in Android Studio Ladybug Feature Drop, and download the latest stable version today to try them out!

Android Studio Ladybug Feature Drop

Gemini in Android Studio

Gemini Code Transforms

Gemini Code Transforms can help you modify, optimize, or add code to your app with AI assistance. Simply right-click in your code editor and select "Gemini > Generate code" or highlight code and select "Gemini > Transform selected code." You can also use the keyboard shortcut Ctrl+\ (⌘+\ on macOS) to bring up the Gemini prompt. Describe the changes you want to make to your code, and Gemini will suggest a code diff, allowing you to easily review and accept only the suggestions you want.

With Gemini Code Transforms, you can simplify complex code, perform specific code transformations, or even generate new functions. You can also refine the suggested code to iterate on the code suggestions with Gemini. It's an AI coding assistant right in your editor, helping you write better code more efficiently.

Android Studio displays a code editor window open to Gemini Code Transform
Gemini Code Transform

Rename

Gemini in Android Studio enhances your workflow with intelligent assistance for common tasks. When renaming a single variable, class, or method from the code editor, the "Refactor > Rename" action uses Gemini to suggest contextually appropriate names, making it smoother and more efficient to refactor names as you’re coding in the editor.

A code editor window open to Gemini renaming a variable in Android Studio
Rename

Rethink

For larger renaming refactors, Gemini can "Rethink variable names" across your whole file. This feature analyzes your code and suggests more intuitive and descriptive names for variables and methods, improving readability and maintainability.

A code editor window open to Gemini analyzing code and suggesting more descriptive names for variables in Android Studio
Rethink

Commit Message

Gemini now assists with commit messages. When committing changes to version control, it analyzes your code modifications and suggests a detailed commit message.

A code editor window open to Gemini analyzing code and suggesting a detailed commit message in Android Studio
Commit Message

Generate Documentation

Gemini in Android Studio makes documenting your code easier than ever. To generate clear and concise documentation, select a code snippet, right-click in the editor and choose "Gemini > Document Function" (or "Document Class" or "Document Property", depending on the context). Gemini will generate a draft that you can then refine and perfect before accepting the changes. This streamlined process helps you create informative documentation quickly and efficiently.

A code editor window open to Gemini adding documentation to a code snippet in Android Studio
Generate Documentation

Debug

Animation Preview support for Wear OS Tiles

Animation Preview support for Wear OS Tiles helps you visualize and debug tile animations with ease. It provides a real-time view of your animations, allowing you to preview them, control playback with options like play, pause, and speed adjustment, and inspect key properties such as initial/end states and animation curves. You can even dynamically modify animation code and instantly observe the results within the inspector, streamlining the debugging and refinement process.

A code editor window open to animation preview support in Android Studio
Animation Preview support for Wear OS Tiles

Wear Health Services

The Wear Health Services feature in Android Studio simplifies the process of testing health and fitness apps by enabling Wear Health Services within the emulator. You can now easily customize various parameters for a given exercise such as heart rate, distance, and speed without needing a physical device or performing the activity itself. This streamlines the development and testing workflow, allowing for faster iteration and more efficient debugging of health-related features.

A code editor window open to Wear Health Services in Android Studio Emulator
Wear Health Services
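
For context, here is a minimal sketch of the kind of Health Services code you might exercise against the emulator's synthetic sensor data (assuming the androidx.health.services client library; error handling and callback unregistration are omitted for brevity):

```kotlin
import android.content.Context
import androidx.health.services.client.HealthServices
import androidx.health.services.client.MeasureCallback
import androidx.health.services.client.data.Availability
import androidx.health.services.client.data.DataPointContainer
import androidx.health.services.client.data.DataType
import androidx.health.services.client.data.DeltaDataType

// Stream heart-rate samples through Health Services. In the emulator, the
// Wear Health Services panel can synthesize these values for you.
fun observeHeartRate(context: Context) {
    val measureClient = HealthServices.getClient(context).measureClient

    val callback = object : MeasureCallback {
        override fun onAvailabilityChanged(
            dataType: DeltaDataType<*, *>,
            availability: Availability
        ) {
            // The sensor became (un)available; reflect that in the UI.
        }

        override fun onDataReceived(data: DataPointContainer) {
            val bpm = data.getData(DataType.HEART_RATE_BPM).lastOrNull()?.value
            // Use the latest heart-rate sample (synthetic when run in the emulator).
        }
    }

    measureClient.registerMeasureCallback(DataType.HEART_RATE_BPM, callback)
}
```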

Optimize

App Links Assistant

App Links Assistant simplifies the process of implementing app links by generating the valid Digital Asset Links JSON that resolves broken deep links for your app. You can review the JSON file and then upload it to your website, resolving issues quickly. This eliminates the manual creation of the JSON file, saving you time and effort. The tool also allows you to compare existing JSON files with newly generated ones to easily identify any discrepancies.

A code editor window open to App Links Assistant in Android Studio
App Links Assistant
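
Once the Digital Asset Links file is live, handling the verified link in your app is ordinary intent handling. A hypothetical example (the Activity, URL shape, and product lookup are invented for illustration):

```kotlin
import android.net.Uri
import android.os.Bundle
import androidx.activity.ComponentActivity

// Hypothetical Activity that handles a verified app link such as
// https://example.com/product/42 (the intent filter with autoVerify
// is declared in AndroidManifest.xml).
class ProductActivity : ComponentActivity() {
    override fun onCreate(savedInstanceState: Bundle?) {
        super.onCreate(savedInstanceState)
        val data: Uri? = intent?.data
        val productId = data?.lastPathSegment // "42" for the URL above
        // ...load and display the product for productId...
    }
}
```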

Google Play SDK Insights Integration

Android Studio now provides enhanced lint warnings for public SDKs from the Google Play SDK Index and the Google Play SDK Console, helping you identify and address potential issues. These warnings alert you if an SDK is outdated, violates Google Play policies, or has known security vulnerabilities. Furthermore, Android Studio provides helpful quick fixes and recommended version ranges whenever possible, making it easier to update your dependencies and keeping your app more secure and compliant.

Android Studio displays a code editor window open to a Gradle build file. The IDE warns that an outdated Firebase authentication library is being used, preventing release to Google Play Console.
Google Play SDK Insights Integration
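
Acting on one of these warnings is usually just a dependency version bump in your module's Gradle build file. A hypothetical Kotlin DSL snippet (the versions shown are illustrative, not recommendations):

```kotlin
// build.gradle.kts (app module) — versions below are illustrative only.
dependencies {
    // Lint flagged this version as outdated per the Google Play SDK Index:
    // implementation("com.google.firebase:firebase-auth:21.0.0")

    // Applying the quick fix moves it into the recommended version range:
    implementation("com.google.firebase:firebase-auth:23.1.0")
}
```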

Quality improvements

Beyond new features, we also continued to improve the overall quality and stability of Android Studio. In fact, the Android Studio team addressed over 770 bugs during the Ladybug Feature Drop development cycle.

IntelliJ platform update

Android Studio Ladybug Feature Drop (2024.2.2) includes the IntelliJ 2024.2 platform release, which has many new features such as more intuitive full line code completion suggestions, a preview in the Search Everywhere dialog, and improved log management for the Java** and Kotlin programming languages.

See the full IntelliJ 2024.2 release notes.

Summary

To recap, Android Studio Ladybug Feature Drop includes the following enhancements and features:

Gemini in Android Studio

    • Gemini Code Transforms
    • Rename
    • Rethink
    • Commit Message
    • Generate Documentation

Debug

    • Animation Preview support for Wear OS Tiles
    • Wear Health Services

Optimize

    • App Links Assistant
    • Google Play SDK Insights Integration

Quality Improvements

    • 770+ bugs addressed

IntelliJ Platform Update

    • More intuitive full line code completion suggestions
    • Preview in the Search Everywhere dialog
    • Improved log management for Java and Kotlin programming languages

Getting Started

Ready for next-level Android development? Download Android Studio Ladybug Feature Drop and unlock these cutting-edge features today. As always, your feedback is important to us – check known issues, report bugs, suggest improvements, and be part of our vibrant community on LinkedIn, Medium, YouTube, or X. Let's build the future of Android apps together!


**Java is a trademark or registered trademark of Oracle and/or its affiliates.