

Version 24.0.0 of the Android Google Mobile Ads SDK is now available. To take advantage of the latest features and performance improvements, we highly recommend you configure your app to upgrade as soon as possible. Major changes to the SDK are as follows:
Starting with version 24.0.0, the Google Mobile Ads SDK requires all apps to be on a minimum Android API level of 23. To adjust the API level, change the value of minSdk in your app-level build.gradle file to 23 or higher.
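For example, in a Kotlin DSL build file this is a one-line change; a minimal sketch (surrounding settings shown only for context):

// app-level build.gradle.kts (illustrative)
android {
    defaultConfig {
        minSdk = 23  // required by Google Mobile Ads SDK 24.0.0 and later
    }
}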
The OPTIMIZE_INITIALIZATION and OPTIMIZE_AD_LOADING flags are now generally available and set to true by default. These flags help reduce ANRs. You can further prevent ANRs by initializing the Google Mobile Ads SDK on a background thread. For more information, see Optimize initialization and ad loading.
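As a rough sketch of background initialization (assuming a custom Application class; MyApplication and the coroutine scope are illustrative choices):

// Illustrative sketch: initialize the Google Mobile Ads SDK off the main thread.
class MyApplication : Application() {
    override fun onCreate() {
        super.onCreate()
        // Dispatchers.IO is one option; any background executor works.
        CoroutineScope(Dispatchers.IO).launch {
            MobileAds.initialize(this@MyApplication) { initializationStatus ->
                // Optionally inspect adapter initialization statuses here.
            }
        }
    }
}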
To prevent merge conflicts for apps that configure API-specific Ad Services, we've removed the android.adservices.AD_SERVICES_CONFIG property tag from the SDK’s manifest file. This change provides greater flexibility for developers who need to customize their Ad Services configurations.
With this Android major version 24 launch and the iOS major version 12 launch last month, new deprecation and sunset dates for older releases are as follows:
For a complete list of changes in v24.0.0 and detailed migration steps, check the release notes and SDK migration guide. If you have any questions or need additional help, contact us through Mobile Ads SDK Support.
Welcome to "Meet the Android Studio Team"! In this blog series, we introduce you to the passionate people who create the Android development tools you use every day. Get to know the engineers, designers, product managers, and more who work hard to craft the best possible experience for Android developers, and explore their unique perspectives.
Meet Dan Dole, a UX Manager for Android Developer UX, who offers a unique perspective on the Android development journey. He highlights the passion and talent within the Android Developer team, emphasizing the importance of elegant solutions and efficient experiences for developers.
Dan also delves into the exciting potential of AI and machine learning to transform Android development, foreseeing a future where AI accelerates learning, refines code, and empowers developers to focus on innovation.
Through his insights, Dan underscores the collaborative spirit and unwavering commitment to developer success that defines the Android Developer Experience.
My journey with Android Development and the Android Studio team started with a conversation with a former colleague and the product lead for Android Developer. She was a leader I respected as someone who was passionate about developers, and believed that UX was a critical component of product development. After meeting with her and understanding the direction of Android, I was convinced that Android could be not just an outstanding mobile platform but a platform that spanned devices, and this was an organization that was focused on enabling developers to bring their talents and creativity to billions of users. Each year, I see us advancing in that direction and feel more confident in my choice to be part of the Android Developer team.
This question can’t be answered without mentioning that the people working on Android Developer tools and APIs are some of the most passionate and talented people I have ever worked with.
I am a UX professional in a highly technical environment. This has been the case for about two decades. One of the challenges I have faced is articulating the value of elegant solutions for developers.
This is partially because developers are very capable and resourceful. They are tolerant, and they will overcome issues that average users won’t. Prior to joining Android Developer Experience, I would have to create processes and negotiate quality bars to drive quality and build efficient experiences.
This challenge built my skills in release management and helped me understand some of the complexities unique to this space, but it also gave me the tools to explain that even though developers may be able to manage complexity better than most, they appreciate refinement, productivity, and quality as much as they appreciate flexibility and capability.
We are in the very early stages of AI and its ability to impact developers. As we learn how to be transparent and give developers control over how an AI can benefit them, we are seeing an immediate impact on accelerating learning and refining code.
I expect AI to remove the “chores” that developers have to do, creating more space for them to be productive. I also expect AI to evolve from generating artifacts to generating actions, making AI features more proactive and allowing developers to adjust more quickly to users' needs.
I lead our Android Developer research and design team. We spend countless hours listening to developers, evaluating feedback, and understanding technology investments. We approach these conversations and research instruments by evaluating what we have already delivered, listening to the challenges developers face, and designing and evaluating new approaches.
The Android Developer team (ENG, Product, UX and Test) is motivated by supporting developers, so all developer feedback is received with gratitude and influences all our investments.
Android is a vibrant and welcoming community, so my advice would be to engage the community. It is where we learn, inspire, and grow together. I have heard many Android developers talk about the pride they have working on this platform and the conviction they have in it being the best platform to work on. I feel like this is unique to Android: the platform isn’t a means to an end, it’s an identity and value system. Android is a community of amazing people, so get involved.
Embrace Dan's vision for the future of Android development and explore the latest AI advancements in Android Studio. Features like AI-powered code generation and refactoring tools empower you to develop higher-quality apps with greater efficiency.
Want to meet more of the Android Studio team? Stay tuned for future installments of this series, where we'll introduce you to new faces and share their unique insights.
Find Dan Dole on LinkedIn.
Welcome to "Meet the Android Studio Team," our new ongoing blog series. Each week, we'll introduce you to the talented people behind Android Studio. Get to know the engineers, designers, product managers, and more who create the best possible experience for Android developers like you. Join us and explore their unique perspectives.
Meet Tor Norbye, an Engineering Director at Google leading the development of Android Studio.
From his early days of coding to leading the charge on AI-powered development tools, Tor shares his insights on the evolution of Android and the vital role Android Studio plays in its future.
We'll delve into the challenges of creating developer tools, the importance of community feedback, and how Google strives to empower developers worldwide.
I grew up in Norway and I was fascinated by programming; my first exposure was as a middle schooler reading program listings in magazines (yes, in the early 80s, monthly computer magazines would include source code!) and in 1983 I got my hands on a microcomputer, and knew immediately that's what I wanted to do as a career. And now, 40+ years later, I still love programming. It's not my day-job anymore, but I still write bits and pieces of code for Android Studio on the shuttle and during quiet periods.
I've worked on developer tools my whole career - first, 14 years at Sun Microsystems after college. In 2010 I got increasingly interested in the rise of mobile computing and really wanted to be part of it, so I joined the Android team, and I've been here since.
Back then there was no "Android Studio". At the time we were working on Eclipse-based tooling for Android development. But we all knew that IntelliJ was the gold-standard for Java development, so a couple years later we began the work on building Android Studio on top of IntelliJ and with various new and ported code from our Eclipse plugins. I then had the honor of doing the unveiling demo at Google I/O in 2013.
The integration of artificial intelligence has absolutely impacted Android developer capabilities, and this is just the beginning.
I felt very fortunate to be part of bringing about the massive shift from desktop computing to mobile computing when I joined Android, and I can't believe I get to be in the middle of a second massive industry shift as well, with AI and large language models.
I actually spend a lot of my time on this, working with Studio engineers, UX and product managers on our various AI related features, and talking to partner AI teams at Google. We've made a huge amount of progress in the last couple of years, both on the Studio feature integration side, as well as Google-wide on the AI side. While there is some skepticism that we're just doing AI features for AI's sake, I don't see it that way. With AI, we can suddenly, with relatively low effort, build useful features not previously possible.
Here's a very simple example from the latest Studio version: When you invoke the Rename refactoring feature, we use Gemini to add additional naming suggestions into the name popup based on what your code is doing. Here we're helping you pick good names – and naming is famously one of the two hardest problems in computer science – naming, cache invalidation and off-by-one errors. Yet LLMs are good at this – so coupled with the safe refactoring machinery in the IDE, we were able to safely add a useful feature with relatively low engineering cost on the IDE side (of course, this is building on top of a massive investment from Google over on the Gemini side).
The field is moving incredibly quickly, so it's hard to predict where things are going, but we're actively working in several areas, making the AI more aware of your codebase, and making it handle larger, complex tasks via AI Agents, and so much more.
Earlier in my career, at a different company, we had big annual releases. I took a lot of pride in my productivity, and as my responsibilities grew, I'd try to do the impossible and deliver, no matter what. I'd not only work long hours, but I'd also try to work as quickly as I could. This led to a lot of stress. I remember putting my (at the time) young children to bed and impatiently waiting for them to fall asleep so that I could head back out to the garage office and start the evening coding shift. And I knew that stress isn't healthy, so I'd also stress about being stressed! This obviously wasn't sustainable.
Now, I emphasize work life balance not only for myself, but also for our team. I want to make sure our work is sustainable, and that people can thrive and be in it for the long term. It's a marathon, not a sprint.
We have a number of feedback channels; the most important one is the Android Studio issue tracker.
We still have a very large backlog of bugs, so it's easy to get the impression that we're ignoring user reports, but that's not true. As a team, we've actually fixed several thousand bugs in 2024 alone. The best bug reports are those that are clear and actionable, ideally with steps to reproduce.
I'm also very thankful to everyone who turns on data sharing in Studio; if you don't already, please consider it! Our analytics is more of an indirect, but still vital, feedback channel from the community. In addition to collecting information on, for example, which menu items are clicked, we also use it to collect quality metrics on system health. For instance, when we detect that the UI is lagging (such as a 1+ second freeze in the UI thread), we grab a thread dump and send it to the server, then aggregate these into a dashboard where we can see top freeze spots in the IDE across the user population, and can focus our efforts on fixing those.
In Android Studio we're always making sure we support the latest technologies and recommendations from Android, Firebase, Material, and other Google technologies. That way, it's easier for developers to adopt recommendations, like using Kotlin, Coroutines, Compose, Material, and so on.
Don't miss our next and final installment in the "Meet the Android Studio Team" series; we'll feature one more talented team member and share their unique perspective. Stay tuned to learn more about the amazing people behind Android Studio.
Find Tor Norbye on Bluesky.
We are updating the location of some Google Meet meeting controls on Android and iOS devices to organize features in a more intuitive way, helping you navigate the Meet layout faster. These are strictly design updates with no changes in functionality.
1. Emoji reactions are moving from the triple-dot overflow menu to the bottom bar. Simply tap the Emoji toggle to access or hide the reaction picker.

Today we're releasing the second beta of Android 16, continuing our work to build a platform that enables creative expression. You can enroll any supported Pixel device to get this and future Android Beta updates over-the-air.
This build adds new support for professional camera experiences and graphical effects, extends our performance framework, and continues the evolution of features related to privacy, security, and background tasks. We’re looking forward to hearing what you think, and thank you in advance for your continued help in making Android a platform that works for everyone.
Android 16 enhances support for professional camera users, allowing for hybrid auto exposure along with precise color temperature and tint adjustments. It's easier than ever to capture motion photos with new Intent actions, and we're continuing to improve UltraHDR images, with support for HEIC encoding and new parameters from the ISO 21496-1 draft standard.
Android 16 adds new hybrid auto-exposure modes to Camera2, allowing you to manually control specific aspects of exposure while letting the auto-exposure (AE) algorithm handle the rest. You can control ISO + AE, and exposure time + AE, providing greater flexibility compared to the current approach where you either have full manual control or rely entirely on auto-exposure.
fun setISOPriority() {
    // ...

    val availablePriorityModes = mStaticInfo.characteristics.get(
        CameraCharacteristics.CONTROL_AE_AVAILABLE_PRIORITY_MODES
    )

    // ...

    // Turn on AE mode to set priority mode
    reqBuilder[CaptureRequest.CONTROL_AE_MODE] = CameraMetadata.CONTROL_AE_MODE_ON
    reqBuilder[CaptureRequest.CONTROL_AE_PRIORITY_MODE] =
        CameraMetadata.CONTROL_AE_PRIORITY_MODE_SENSOR_SENSITIVITY_PRIORITY
    reqBuilder[CaptureRequest.SENSOR_SENSITIVITY] = TEST_SENSITIVITY_VALUE

    val request: CaptureRequest = reqBuilder.build()

    // ...
}
Android 16 adds camera support for fine color temperature and tint adjustments to better support professional video recording applications. White balance settings are currently controlled through CONTROL_AWB_MODE, which contains options limited to a preset list, such as Incandescent, Cloudy, and Twilight. The COLOR_CORRECTION_MODE_CCT enables the use of COLOR_CORRECTION_COLOR_TEMPERATURE and COLOR_CORRECTION_COLOR_TINT for precise adjustments of white balance based on the correlated color temperature.
fun setCCT() {
    // ... (Your existing code before this point) ...

    val colorTemperatureRange: Range<Int> =
        mStaticInfo.characteristics[CameraCharacteristics.COLOR_CORRECTION_COLOR_TEMPERATURE_RANGE]

    // Set to manual mode to enable CCT mode
    reqBuilder[CaptureRequest.CONTROL_AWB_MODE] = CameraMetadata.CONTROL_AWB_MODE_OFF
    reqBuilder[CaptureRequest.COLOR_CORRECTION_MODE] = CameraMetadata.COLOR_CORRECTION_MODE_CCT
    reqBuilder[CaptureRequest.COLOR_CORRECTION_COLOR_TEMPERATURE] = 5000
    reqBuilder[CaptureRequest.COLOR_CORRECTION_COLOR_TINT] = 30

    val request: CaptureRequest = reqBuilder.build()

    // ... (Your existing code after this point) ...
}
Android 16 adds standard Intent actions — ACTION_MOTION_PHOTO_CAPTURE and ACTION_MOTION_PHOTO_CAPTURE_SECURE — which request that the camera application capture a motion photo and return it.
You must either pass an extra EXTRA_OUTPUT to control where the image will be written, or a Uri through Intent setClipData. If you don't set a ClipData, the EXTRA_OUTPUT Uri will be copied into it for you when calling Context.startActivity.
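As an illustrative sketch only (it assumes the action is exposed as a MediaStore constant, similar to ACTION_IMAGE_CAPTURE, and that outputUri is a writable content Uri you already have):

// Hypothetical example: request a motion photo and have it written to outputUri.
fun requestMotionPhoto(launcher: ActivityResultLauncher<Intent>, outputUri: Uri) {
    val intent = Intent(MediaStore.ACTION_MOTION_PHOTO_CAPTURE).apply {
        // If clipData is left unset, the platform copies this Uri into it on startActivity.
        putExtra(MediaStore.EXTRA_OUTPUT, outputUri)
    }
    launcher.launch(intent)
}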
Android 16 continues our work to deliver dazzling image quality with UltraHDR images. It adds support for UltraHDR images in the HEIC file format. These images will get ImageFormat type HEIC_ULTRAHDR and will contain an embedded gainmap similar to the existing UltraHDR JPEG format. We're working on AVIF support for UltraHDR as well, so stay tuned.
In addition, Android 16 implements additional parameters in UltraHDR from the ISO 21496-1 draft standard, including the ability to get and set the colorspace that gainmap math should be applied in, as well as support for HDR encoded base images with SDR gainmaps.
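As a small sketch (the image source here is illustrative), you can check whether a decoded image carries an embedded gainmap using the existing Bitmap gainmap APIs:

// Illustrative sketch: decode an image and check for an embedded UltraHDR gainmap.
fun hasUltraHdrGainmap(context: Context, uri: Uri): Boolean {
    val source = ImageDecoder.createSource(context.contentResolver, uri)
    val bitmap = ImageDecoder.decodeBitmap(source)
    return bitmap.hasGainmap()
}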
Android 16 adds RuntimeColorFilter and RuntimeXfermode, allowing you to author complex effects like Threshold, Sepia, and Hue Saturation and apply them to draw calls. Since Android 13, you've been able to use AGSL to create custom RuntimeShaders that extend Shaders. The new API mirrors this, adding an AGSL-powered RuntimeColorFilter that extends ColorFilters, and a Xfermode effect that allows you to implement AGSL-based custom compositing and blending between source and destination pixels.
private val thresholdEffectString = """
    uniform half threshold;
    half4 main(half4 c) {
        half luminosity = dot(c.rgb, half3(0.2126, 0.7152, 0.0722));
        half bw = step(threshold, luminosity);
        return bw.xxx1 * c.a;
    }"""

fun setCustomColorFilter(paint: Paint) {
    val filter = RuntimeColorFilter(thresholdEffectString)
    // Set the "threshold" uniform declared in the AGSL string above.
    filter.setFloatUniform("threshold", 0.5f)
    paint.colorFilter = filter
}
With every Android release, we seek to make the platform more efficient, privacy conscious, internationalization friendly, and robust, balancing the needs of apps against hardware support, system performance, user privacy, and battery life. This can result in behavior changes that impact compatibility.
Android 15 enforced edge-to-edge for apps targeting Android 15 (SDK 35), but your app could opt-out by setting R.attr#windowOptOutEdgeToEdgeEnforcement to true. Once your app targets Android 16 (Baklava), R.attr#windowOptOutEdgeToEdgeEnforcement is deprecated and disabled and your app cannot opt-out of going edge-to-edge. To be compatible with Android 16 Beta 2, ensure your app supports edge-to-edge and remove any use of R.attr#windowOptOutEdgeToEdgeEnforcement. To support edge-to-edge, see the Compose and Views guidance. Please let us know about concerns in our tracker on the feedback page.
For apps targeting Android 16 or higher, BODY_SENSORS permissions are transitioning to the granular permissions under android.permissions.health also used by Health Connect. Any API previously requiring BODY_SENSORS or BODY_SENSORS_BACKGROUND will now require the corresponding android.permissions.health permission. This affects the following data types, APIs, and foreground service types:
If your app uses these APIs, it should now request the respective granular permissions:
These permissions are the same as those that guard access to reading data from Health Connect, the Android datastore for health, fitness, and wellness data.
An abandoned job occurs when the JobParameters object associated with the job has been garbage collected, but jobFinished has not been called to signal job completion. This indicates that the job may be running and being rescheduled without the application's awareness.
Applications in Android 16 that rely on JobScheduler without maintaining a strong reference to the JobParameters object will now be granted the new job stop reason STOP_REASON_TIMEOUT_ABANDONED on timeout, instead of STOP_REASON_TIMEOUT.
If there are frequent occurrences of the new abandoned stop reason, the system will take mitigation steps to reduce job frequency. Please use the new stop reason to detect and reduce abandoned jobs.
Note: If you're using WorkManager, you're not expected to be impacted by this change — one nice side effect of using Android Jetpack to schedule your work.
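For apps that do use JobScheduler directly, here is a minimal sketch (the service and its work are illustrative) of holding a strong reference to JobParameters and signaling completion with jobFinished:

// Illustrative sketch: keep a strong reference to JobParameters until jobFinished is called.
class UploadJobService : JobService() {
    private var params: JobParameters? = null  // strong reference for the lifetime of the job

    override fun onStartJob(jobParameters: JobParameters): Boolean {
        params = jobParameters
        doWorkAsync { succeeded ->
            // Always signal completion so the job isn't considered abandoned.
            jobFinished(jobParameters, /* wantsReschedule = */ !succeeded)
            params = null
        }
        return true  // work continues on another thread
    }

    override fun onStopJob(jobParameters: JobParameters): Boolean {
        // Return true to reschedule if the system stops the job;
        // check jobParameters.stopReason if you need to know why.
        return true
    }

    private fun doWorkAsync(onDone: (Boolean) -> Unit) { /* hypothetical async work */ }
}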
Android 16 introduces default security hardening against Intent redirection attacks regardless of your app's targetSDK version. The removeLaunchSecurityProtection API allows you to opt-out of this protection if your testing reveals issues.
Note: Opting out of security protections should be done with caution and only when absolutely necessary, as it can increase the risk of security vulnerabilities.
val iSublevel = intent.getParcelableExtra("sub_intent", Intent::class.java)
iSublevel?.let {
    it.removeLaunchSecurityProtection()
    startActivity(it)
}
Apps targeting Android 15 (API level 35) have the elegantTextHeight TextView attribute set to true by default, replacing the compact font with one that is much more readable. You could override this by setting the elegantTextHeight attribute to false.
Android 16 deprecates the elegantTextHeight attribute, and the attribute will be ignored once your app targets Android 16. The “UI fonts” controlled by these APIs are being discontinued, so you should adapt any layouts to ensure consistent and future-proof text rendering in Arabic, Lao, Myanmar, Tamil, Gujarati, Kannada, Malayalam, Odia, Telugu, or Thai.
Android 15 introduced support for 16KB memory pages to optimize performance of the platform. Android 16 adds a compatibility mode, allowing some apps built for 4KB memory pages to run on a device configured for 16KB memory pages.
If Android detects that your app has 4KB aligned memory pages, it will automatically use compatibility mode and display a notification dialog to the user. Setting the android:pageSizeCompat property in the AndroidManifest.xml to enable the backwards compatibility mode will prevent the display of the dialog when your app launches. For best performance, reliability, and stability, your app should still be 16KB aligned. Read our recent blog post about updating your apps to support 16KB memory pages for more details.
Users can now customize their measurement system in regional preferences within Settings. The user preference is included as part of the locale code, so you can register a BroadcastReceiver on ACTION_LOCALE_CHANGED to handle locale configuration changes when regional preferences change.
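A minimal sketch of reacting to that change at runtime (the receiver and what it refreshes are illustrative):

// Illustrative sketch: refresh cached, locale-sensitive formatters when regional preferences change.
val localeChangeReceiver = object : BroadcastReceiver() {
    override fun onReceive(context: Context, intent: Intent) {
        if (intent.action == Intent.ACTION_LOCALE_CHANGED) {
            // Re-create any cached NumberFormat/MeasureFormat instances here.
        }
    }
}
context.registerReceiver(localeChangeReceiver, IntentFilter(Intent.ACTION_LOCALE_CHANGED))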
Using formatters can help match the local experience. For example, "0.5 in" in English (United States) is "12,7 mm" for a user who has set their phone to English (Denmark), or who uses their phone in English (United States) with the metric system as the measurement system preference.
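One way to do this with the ICU APIs already in the platform is sketched below (the value and unit choice are illustrative; LocaleData.getMeasurementSystem is used here to pick between inches and millimeters):

import android.icu.text.MeasureFormat
import android.icu.text.MeasureFormat.FormatWidth
import android.icu.util.LocaleData
import android.icu.util.Measure
import android.icu.util.MeasureUnit
import android.icu.util.ULocale

// Illustrative sketch: format a length according to the user's measurement system preference.
fun formatLength(valueInches: Double): String {
    val locale = ULocale.getDefault()
    val metric = LocaleData.getMeasurementSystem(locale) == LocaleData.MeasurementSystem.SI
    val measure = if (metric) {
        Measure(valueInches * 25.4, MeasureUnit.MILLIMETER)
    } else {
        Measure(valueInches, MeasureUnit.INCH)
    }
    return MeasureFormat.getInstance(locale, FormatWidth.SHORT).format(measure)
}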
To find these settings in Android 16 Beta 2, open the Settings app and navigate to System > Languages & region.
In Android 16, the live wallpaper framework is gaining a new content API to address the challenges of dynamic, user-driven wallpapers. Currently, live wallpapers incorporating user-provided content require complex, service-specific implementations. Android 16 introduces WallpaperDescription and WallpaperInstance. WallpaperDescription allows you to identify distinct instances of a live wallpaper from the same service. For example, a wallpaper that has instances on both the home screen and on the lock screen may have unique content in both places. The wallpaper picker and WallpaperManager use this metadata to better present wallpapers to users, streamlining the process for you to create diverse and personalized live wallpaper experiences.
The SystemHealthManager introduces the getCpuHeadroom and getGpuHeadroom APIs, designed to provide games and resource-intensive apps with estimates of available CPU and GPU resources. These methods offer a way for you to gauge how your app or game can best improve system health, particularly when used in conjunction with other Android Dynamic Performance Framework (ADPF) APIs that detect thermal throttling. By using CpuHeadroomParams and GpuHeadroomParams on supported devices, you will be able to customize the time window used to compute the headroom and select between average or minimum resource availability. This can help you reduce your CPU or GPU resource usage accordingly, leading to better user experiences and improved battery life.
Android 16 adds APIs that support sharing access to Android Keystore keys with other apps. The new KeyStoreManager class supports granting and revoking access to keys by app uid, and includes an API for apps to access shared keys.
The new MediaQuality package in Android 16 exposes a set of standardized APIs for access to audio and picture profiles and hardware-related settings. This allows streaming apps to query profiles and apply them to media dynamically:
The API allows apps to switch between profiles and users to enjoy the benefits of tuning supported TVs to best suit their content.
Android 16 adds additional APIs to enhance UI semantics that help improve consistency for users that rely on accessibility services, such as TalkBack.
Android 16 extends TtsSpan with a TYPE_DURATION, consisting of ARG_HOURS, ARG_MINUTES, and ARG_SECONDS. This allows you to directly annotate time duration, ensuring accurate and consistent text-to-speech output with services like TalkBack.
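For instance, a minimal sketch of annotating a duration string (the text and values are illustrative):

// Illustrative sketch: annotate "1h 30m" so text-to-speech services read it as a duration.
fun durationLabel(): CharSequence {
    val args = PersistableBundle().apply {
        putInt(TtsSpan.ARG_HOURS, 1)
        putInt(TtsSpan.ARG_MINUTES, 30)
    }
    val text = SpannableString("1h 30m")
    text.setSpan(TtsSpan(TtsSpan.TYPE_DURATION, args), 0, text.length, Spanned.SPAN_EXCLUSIVE_EXCLUSIVE)
    return text
}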
Android currently allows UI elements to derive their accessibility label from another element, and now offers the ability to associate multiple labels with one element, a common scenario in web content. By introducing a list-based API within AccessibilityNodeInfo, Android can directly support these multi-label relationships. As part of this change, we've deprecated AccessibilityNodeInfo setLabeledBy and getLabeledBy in favor of addLabeledBy, removeLabeledBy, and getLabeledByList.
Android 16 adds accessibility APIs that allow you to convey the expanded or collapsed state of interactive elements, such as menus and expandable lists. By setting the expanded state using setExpandedState and dispatching TYPE_WINDOW_CONTENT_CHANGED AccessibilityEvents with a CONTENT_CHANGE_TYPE_EXPANDED content change type, you can ensure that screen readers like TalkBack announce state changes, providing a more intuitive and inclusive user experience.
Android 16 adds RANGE_TYPE_INDETERMINATE, giving a way for you to expose RangeInfo for both determinate and indeterminate ProgressBar widgets, allowing services like TalkBack to more consistently provide feedback for progress indicators.
The new AccessibilityNodeInfo getChecked and setChecked(int) methods in Android 16 now support a "partially checked" state in addition to "checked" and "unchecked." This replaces the deprecated boolean isChecked and setChecked(boolean).
This preview is for the next major release of Android with a planned launch in Q2 of 2025 and we plan to have another release with new developer APIs in Q4. The Q2 major release will be the only release in 2025 to include behavior changes that could affect apps. The Q4 minor release will pick up feature updates, optimizations, and bug fixes; like our non-SDK quarterly releases, it will not include any intentional app-impacting behavior changes.
We'll continue to have quarterly Android releases. The Q1 and Q3 releases provide incremental updates to ensure continuous quality. We’re putting additional energy into working with our device partners to bring the Q2 release to as many devices as possible.
There’s no change to the target API level requirements and the associated dates for apps in Google Play; our plans are for one annual requirement each year, tied to the major API level.
In addition to performing compatibility testing on this next major release, make sure that you're compiling your apps against the new SDK, and use the compatibility framework to enable targetSdkVersion-gated behavior changes as they become available for early testing.
The Android 16 Preview program runs from November 2024 until the final public release in Q2 of 2025. At key development milestones, we'll deliver updates for your development and testing environments. Each update includes SDK tools, system images, emulators, API reference, and API diffs. We'll highlight critical APIs as they are ready to test in the preview program in blogs and on the Android 16 developer website.
We’re targeting March of 2025 for our Platform Stability milestone. At this milestone, we’ll deliver final SDK/NDK APIs and also final internal APIs and app-facing system behaviors. From that time you’ll have several months before the final release to complete your testing. Learn more by checking the release timeline details.
You can enroll any supported Pixel device to get this and future Android Beta updates over-the-air. If you don’t have a Pixel device, you can use the 64-bit system images with the Android Emulator in Android Studio. If you are currently on Android 16 Beta 1 or are already in the Android Beta program, you will be offered an over-the-air update to Beta 2.
We're looking for your feedback so please report issues and submit feature requests on the feedback page. The earlier we get your feedback, the more we can include in our work on the final release.
For the best development experience with Android 16, we recommend that you use the latest preview of Android Studio (Meerkat). Once you’re set up, here are some of the things you should do:
We’ll update the beta system images and SDK regularly throughout the Android 16 release cycle. Once you’ve installed a beta build, you’ll automatically get future updates over-the-air for all later previews and Betas.
For complete information, visit the Android 16 developer site.
Android Studio isn't just code and algorithms – it's built by real people with fascinating stories. Our "Meet the Android Studio Team" series gives you a glimpse into the lives and passions of the talented individuals who craft the tools you use every day. Tune in each month to meet new team members and discover their unique journey.
Meet Trevor Johns, a seasoned Staff Developer Programs Engineer at Google.
Reflecting on his journey, Trevor sheds light on the most impactful advancements in the Android ecosystem and offers a glimpse into his vision for the future where AI plays a pivotal role in streamlining development workflows.
Trevor discusses the Android Studio team's dedication to enhancing developer productivity through AI, highlighting their focus on understanding and addressing developer needs, and reflects on the dynamic journey of Android development while sharing valuable insights.
I've been at Google in various roles since 2007, and transferred to the Android team in 2009 shortly after the launch of the HTC G1 — the first publicly available Android phone. Even in those early days it was clear that mobile computing was a unique opportunity to reimagine many of the limitations of desktop computers and how users interact with the digital world.
Among my first projects were helping developers optimize their apps for the MyTouch 3G and Motorola Droid, as well as creating developer resources for Android's 1.6 Donut release.
Over the years, I've worked on various parts of the Android OS including our first tablet devices, Android Wear, helping develop the original Android support libraries (which later became Jetpack), and the migration to Kotlin.
Recently I joined the Android Studio team to help improve developer productivity, using AI to streamline common developer tasks and help developers have more time to focus on creativity.
Like the rest of Android, we approach development of new features by listening to our developer community. We hold regular listening sessions with publishers, work with our UX research team to conduct case studies, and participate in online discussions to get a sense for where developers face the most friction — and then try to find ways to reduce that friction.
For example, we developed Gemini in Android Studio's integration with Play Vitals and Firebase Crashlytics based on feedback from members of the developer community who commented to let us know where they would find AI most useful across their developer workflow.
Speaking of, if you'd like to provide us with feedback, you can always file a bug or feature request on the Android Studio issue tracker.
In addition to listening to the Android community, we also keep an eye on what's being developed across the rest of the Android team and make sure that Android Studio has the right tools to help developers quickly migrate between Android versions and adopt those new platform features.
Beyond that, the Studio team provides leading edge editing tools to make sure that Android remains one of the easiest computing platforms to develop for — unlocking this unique computing platform for millions of developers.
For developers, my answer would have to be the migration to Kotlin. This language has modernized the Android developer experience — letting developers write apps with less code and fewer errors. It's also the foundation for Jetpack Compose, which is the future of Android UI development.
I'd love to see Gemini be able to not just autocomplete code for me, but generate scaffolds for new projects. That way I can focus on building features rather than worrying about basic structure when starting a new project.
Follow Trevor's lead and embrace the power of Kotlin for modern Android development. Enhance your skills and write better Android apps faster with Kotlin.
Get ready for another inspiring story! The "Meet the Android Studio Team" series continues next week with a new team member in the spotlight. Don't miss their unique insights and journey.
Find Trevor Johns on LinkedIn, X, Bluesky, and Medium.
Accurate time is crucial for a wide variety of app functionalities, from scheduling and event management to transaction logging and security protocols. However, a user can change the device’s time, so a more accurate source of time than the device’s local system time may be required. That's why we're introducing the TrustedTime API that leverages Google's infrastructure to deliver a trustworthy timestamp, independent of the device's potentially manipulated local time settings.
The new API leverages Google's secure infrastructure to provide a trusted time source to your app. TrustedTime periodically syncs its clock to Google's servers, which have access to a highly accurate time source, so that you do not need to make a server request every time you want to know the current network time. Additionally, we've integrated a unique model that calculates the device's clock drift. This will inform you when the time may be inaccurate between network synchronizations.
Many apps rely on the device's clock for various features. However, users can change their device's time settings, either intentionally or unintentionally, therefore changing the time that your app gets. This can lead to problems such as:
The TrustedTime API opens up a range of possibilities for enhancing the reliability and security of your apps, with use cases in areas such as:
The TrustedTime API is built on top of Google Play services, making integration seamless for most Android developers.
The simplest way to integrate is to initialize the TrustedTimeClient early in your app lifecycle, such as in the onCreate() method of your Application class. The following example uses dependency injection with Hilt to make the time client available to components throughout the app.
// TrustedTimeClientAccessor.kt
import com.google.android.gms.tasks.Task
import com.google.android.gms.time.TrustedTimeClient

interface TrustedTimeClientAccessor {
    fun createClient(): Task<TrustedTimeClient>
}

// TrustedTimeModule.kt
@Module
@InstallIn(SingletonComponent::class)
class TrustedTimeModule {
    @Provides
    fun provideTrustedTimeClientAccessor(
        @ApplicationContext context: Context
    ): TrustedTimeClientAccessor {
        return object : TrustedTimeClientAccessor {
            override fun createClient(): Task<TrustedTimeClient> {
                return TrustedTime.createClient(context)
            }
        }
    }
}
// TrustedTimeDemoApplication.kt
@HiltAndroidApp
class TrustedTimeDemoApplication : Application() {

    @Inject
    lateinit var trustedTimeClientAccessor: TrustedTimeClientAccessor

    var trustedTimeClient: TrustedTimeClient? = null
        private set

    override fun onCreate() {
        super.onCreate()
        trustedTimeClientAccessor.createClient().addOnCompleteListener { task ->
            if (task.isSuccessful) {
                // Stash the client
                trustedTimeClient = task.result
            } else {
                // Handle error, maybe retry later
                val exception = task.exception
            }
        }
        // To use Kotlin Coroutines, you can use the await() method; see
        // https://developers.google.com/android/guides/tasks#kotlin_coroutine for more info.
    }
}

NOTE: If you don't use dependency injection in your app, you can simply call `TrustedTime.createClient(context)` instead of using a TrustedTimeClientAccessor.
// Retrieve the TrustedTimeClient from your application class
val myApp = applicationContext as TrustedTimeDemoApplication

// In this example, System.currentTimeMillis() is used as a fallback if the
// client is null (i.e. client creation task failed) or when there is no time
// signal available. You may not want to do this if using the system clock is
// not suitable for your use case.
val currentTimeMillis = myApp.trustedTimeClient?.computeCurrentUnixEpochMillis()
    ?: System.currentTimeMillis()

// trustedTimeClient.computeCurrentInstant() can be used if Instant is
// preferred to long for Unix epoch times and you are able to use the APIs.
@AndroidEntryPoint
class MainActivity : AppCompatActivity() {

    @Inject
    lateinit var trustedTimeAccessor: TrustedTimeClientAccessor

    private var trustedTimeClient: TrustedTimeClient? = null

    override fun onCreate(savedInstanceState: Bundle?) {
        super.onCreate(savedInstanceState)
        ...
        trustedTimeAccessor.createClient().addOnCompleteListener { task ->
            if (task.isSuccessful) {
                // Stash the client
                trustedTimeClient = task.result
            } else {
                // Handle error, maybe retry later or use another time source.
                val exception = task.exception
            }
        }
    }

    private fun getCurrentTimeInMillis(): Long? {
        return trustedTimeClient?.computeCurrentUnixEpochMillis()
    }
}
The TrustedTime API is available on all devices running Google Play services on Android 5 (Lollipop) and above. You need to add the dependency com.google.android.gms:play-services-time:16.0.1 (or above) to access the new API. No additional permission is required to use this API. However, TrustedTime needs an internet connection after the device starts up to provide timestamps. If the device hasn't connected to the internet since booting, the TrustedTime APIs won't return timestamps.
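In a Kotlin DSL build file, that is a single dependency declaration:

// app-level build.gradle.kts
dependencies {
    implementation("com.google.android.gms:play-services-time:16.0.1")
}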
It’s important to note that the device's internal clock can drift due to factors like temperature, doze mode, and battery level. TrustedTime doesn't prevent this drift, but its APIs provide an error estimate for each timestamp. Use this estimate to determine if the timestamp's accuracy meets your application's requirements. While TrustedTime makes it more difficult for users to manipulate the time accessed by your app, it does not guarantee complete safety. Advanced techniques can still be used to tamper with the device’s time.
To learn more about the TrustedTime API, check out the following resources: