Tag Archives: ios

Upcoming security changes to Google’s OAuth 2.0 authorization endpoint in embedded webviews

Posted by Badi Azad, Group Product Manager (@badiazad)

The Google Identity team is continually working to improve Google Account security and create a safer and more secure experience for our users. As part of that work, we recently introduced a new secure browser policy prohibiting Google OAuth requests in embedded browser libraries, commonly referred to as embedded webviews. All embedded webviews will be blocked starting on September 30, 2021.

Embedded webview libraries are problematic because they allow a nefarious developer to intercept and alter communications between Google and its users by acting as a "man in the middle." An application embedding a webview can modify or intercept network requests, insert custom scripts that can potentially record every keystroke entered in a login form, access session cookies, or alter the content of the webpage. These libraries also allow the removal of key elements of a browser that hold user trust, such as the guarantee that the response originates from Google's servers, the display of the website domain, and the ability to inspect the security of a connection. Additionally, the IETF's OAuth 2.0 for Native Apps guidelines require that native apps not use embedded user-agents such as webviews to perform authorization requests.

Embedded webviews not only affect account security; they can also affect the usability of your application. The sandboxed storage environment of an embedded webview disconnects a user from the single sign-on features they expect from Google. A full-featured web browser supports multiple tools to help a logged-out user quickly sign in to their account, including password managers and Web Authentication libraries. Google's users also expect multi-step login processes, including two-step verification and child account authorizations, to function seamlessly when a login flow involves multiple devices, when switching to another app on the device, or when communicating with peripherals such as a security key.

Instructions for impacted developers

Developers must register an appropriate OAuth client for each platform (Desktop, Android, iOS, etc.) on which their app will run, in compliance with Google's OAuth 2.0 Policies. You can verify that the OAuth client ID used by your installed application is the most appropriate choice for your platform by visiting the Google API Console's Credentials page. A "Web application" client type in use by an Android application is an example of mismatched use. Reference our OAuth 2.0 for Mobile & Desktop Apps guide to properly integrate the appropriate client for your app's platform.

Applications that open all links and URLs inside an embedded webview should follow the instructions below for Android, iOS, macOS, and captive portals:

Android

Embedded webviews implementing or extending Android WebView do not comply with Google's secure browser policy for its OAuth 2.0 Authorization Endpoint. Apps should allow general, third-party links to be handled by the default behaviors of the operating system, enabling a user's preferred routing to their chosen default web browser or another developer's preferred routing to its installed app through Android App Links. Apps may alternatively open general links to third-party sites in Android Custom Tabs.
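As an illustration, the following Kotlin sketch opens a link in a Custom Tab instead of an embedded webview. It assumes the androidx.browser library is declared as a dependency, and openInCustomTab is a hypothetical helper name:

import android.content.Context
import android.net.Uri
import androidx.browser.customtabs.CustomTabsIntent

// Open a URL in a Custom Tab. The page is rendered by the user's
// browser, so the app cannot inspect or modify its content.
fun openInCustomTab(context: Context, url: String) {
    CustomTabsIntent.Builder().build().launchUrl(context, Uri.parse(url))
}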

iOS & macOS

Embedded webviews implementing or extending WKWebView, or the deprecated UIWebView, do not comply with Google's secure browser policy for its OAuth 2.0 Authorization Endpoint. Apps should allow general, third-party links to be handled by the default behaviors of the operating system, enabling a user's preferred routing to their chosen default web browser or another developer's preferred routing to its installed app through Universal Links. Apps may alternatively open general links to third-party sites in SFSafariViewController.

Captive portals

If your computer network intercepts network requests and redirects them to a web portal that supports authorization with a Google Account, your web content could be displayed in an embedded webview controlled by a captive network assistant. You should provide potential visitors with instructions on how to access your network using their default web browser. For more information, reference the Google Account Help article Sign in to a Wi-Fi network with your Google Account.

New IETF standards adopted by Android and iOS may help users access your captive pages in a full-featured web browser. Captive networks should integrate the proposed IETF standard Captive-Portal Identification in DHCP and Router Advertisements (RAs) to inform clients that they are behind a captive portal enforcement device when joining the network, rather than relying on traffic interception. Networks should also integrate the proposed IETF Captive Portal API standard to quickly direct clients to a required portal URL for Internet access. For more information, reference Captive portal API support for Android and Apple's How to modernize your captive network developer articles.

Test for compatibility

If you're a developer who currently uses an embedded webview for Google OAuth 2.0 authorization flows, be aware that embedded webviews will be blocked as of September 30, 2021. To verify whether the authorization flow launched by your application is affected by these changes, test your application for compatibility and compliance with the policies outlined in this post.

You can add a query parameter to your authorization request URI to test your application for potential impact before September 30, 2021. The following steps describe how to adjust your current requests to Google's OAuth 2.0 Authorization Endpoint to include an additional query parameter for testing purposes.

  1. Go to where you send requests to Google's OAuth 2.0 Authorization Endpoint. Example URI: https://accounts.google.com/o/oauth2/v2/auth
  2. Add the disallow_webview parameter with a value of true to the query component of the URI. Example: disallow_webview=true
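For example, a complete test request might look like the following, where the client_id and redirect_uri values are placeholders and the URI is wrapped for readability:

https://accounts.google.com/o/oauth2/v2/auth?
    client_id=YOUR_CLIENT_ID&
    redirect_uri=YOUR_REDIRECT_URI&
    response_type=code&
    scope=email&
    disallow_webview=true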

An implementation affected by the planned changes will see a disallowed_useragent error when loading Google's OAuth 2.0 Authorization Endpoint with the disallow_webview=true query string in an embedded webview, instead of the authorization flows currently displayed. If you do not see an error message while testing the effect of the new embedded webview policies, your app's implementation might not be impacted by this announcement.

Note: A website's ability to request authorization from a Google Account may be impacted by another developer's decision to use an embedded webview in their app. For example, if a messaging or news application opens links to your site in an embedded webview, the features available on your site, including Google OAuth 2.0 authorization flows, may be impacted. If your site or app is impacted by the implementation choice of another developer, please contact that developer directly.

User-facing warning message

A warning message may be displayed in non-compliant authorization requests after August 30, 2021. The warning message will include the user support email defined in your project's OAuth consent screen in Google API Console and direct the user to visit our Sign in with a supported browser support article.

A screenshot showing an example Google OAuth authorization dialog including a warning message: To help protect your account, Google will soon block apps that don't comply with Google's embedded webview policy. You can let the app developer (moo@gmail.com) know that this app should stop using embedded webviews

Developers may acknowledge the upcoming enforcement and suppress the warning message by passing a specific query parameter to the authorization request URI. The following steps explain how to adjust your authorization requests to include the acknowledgement parameter:

  1. Go to where you send requests to Google's OAuth 2.0 Authorization Endpoint. Example URI: https://accounts.google.com/o/oauth2/v2/auth
  2. Add an ack_webview_shutdown parameter with a value of the enforcement date: 2021-09-30. Example: ack_webview_shutdown=2021-09-30
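For example, the test URI shown earlier would carry the acknowledgement parameter instead, again with placeholder client values:

https://accounts.google.com/o/oauth2/v2/auth?
    client_id=YOUR_CLIENT_ID&
    redirect_uri=YOUR_REDIRECT_URI&
    response_type=code&
    scope=email&
    ack_webview_shutdown=2021-09-30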

A successful request to Google's OAuth 2.0 Authorization Endpoint including the acknowledgement query parameter and enforcement date will suppress the warning message in non-compliant authorization requests. All non-compliant authorization requests will display a disallowed_useragent error when loading Google's OAuth 2.0 Authorization Endpoint after the enforcement date.

Related content

New iOS Data Protection setting protects data sharing between Google Workspace and personal Google accounts

What’s changing 

We’re adding a new admin setting which restricts data and content sharing between Google Workspace accounts and personal Google accounts in Gmail, Drive, Docs, Sheets, and Slides on iOS. 

When the data protection setting is enabled, users can only share or save content, such as files, emails, or copied and pasted content, within Workspace accounts. This prevents users from sharing a file with their personal Google accounts or saving a file to their personal Google Drive. 



Who’s impacted 

Admins and end users 


Why it’s important 

Google applications on iOS support multi-user logins, allowing users to access Gmail, Google Drive, Docs, Sheets, and Slides with both their personal and Google Workspace accounts. Giving admins the ability to control how data is shared across user accounts helps minimize accidental data sharing. Together with the previously released copy-and-paste and drag-and-drop restrictions, these security measures help increase the security of your corporate data on iOS. 


Getting started 

  • Admins: This feature will be OFF by default and can be enabled at the OU and domain level. Visit the Help Center to learn more about applying settings for iOS devices.

  • End users: There is no end user setting for this feature. When enabled by your admin, you will be able to securely share enterprise Google Workspace content between your Google Workspace apps. 

Rollout pace 

  • Rapid Release and Scheduled Release domains: This feature is available now for all users. 

Availability 

  • Available to Google Workspace Enterprise Standard, Enterprise Plus, and Education Plus customers 
  • Not available to Google Workspace Essentials, Business Starter, Business Standard, Business Plus, Enterprise Essentials, Education Fundamentals, Frontline, and Nonprofits, as well as G Suite Basic and Business customers 

Resources 

Office editing on iOS brings Google Workspace collaboration to Microsoft Office files

Quick launch summary 

We’re making Office editing available on iOS. This feature brings the collaborative and assistive features of Google Workspace to your Microsoft Office files when you’re using your iOS device. Already available on the web and Android, it: 
  • Allows you to edit, comment, and collaborate on Microsoft Office files using Google Docs’, Sheets’, and Slides’ powerful real-time collaboration tools. 
  • Improves sharing options and controls, and reduces the need to download and email file attachments. 
  • Streamlines workflows by reducing the need to convert file types. 

Office editing will replace Quickoffice (sometimes known as Office Compatibility Mode), which has more limited functionality and collaboration capabilities. See more about Office editing in our announcement for the feature on the web.

Getting started 

Rollout pace 

  • This feature is available now for all users. 

Availability 

  • Available to Google Workspace Essentials, Business Starter, Business Standard, Business Plus, Enterprise Essentials, Enterprise Standard, and Enterprise Plus as well as G Suite Basic, Business, Education, Enterprise for Education, and Nonprofits customers 
  • Available to users with personal Google Accounts 

Resources 

New look and feel for Google Meet mobile apps

Quick launch summary

We’re updating the user interface (UI) of the Google Meet mobile apps for Android and iOS. The new mobile UI will have the same look and feel as that of the meeting experience in the Gmail app.

A new look and feel for Google Meet mobile apps

In addition to the revamped design, you’ll now see a New Meeting button. When you tap on this button, you’ll see three options:
  • Get meeting joining info to share with others.
  • Start a Meet call instantly.
  • Schedule a new meeting in Google Calendar.

New options for new meetings in Google Meet mobile apps

This new UI is rolling out on iOS now; check back for an update on this post when the rollout to Android begins.

Getting started

  • Admins: There is no admin control for this feature.
  • End users: This new UI will appear by default once you’ve upgraded your Meet iOS app to version 45 or above.

Rollout pace

  • iOS
  • Android

Availability

  • Available to all G Suite customers and users with personal Google Accounts

Resources

ML Kit Pose Detection Makes Staying Active at Home Easier

Posted by Kenny Sulaimon, Product Manager, ML Kit; Chengji Yan and Areeba Abid, Software Engineers, ML Kit

ML Kit logo

Two months ago we introduced the standalone version of the ML Kit SDK, making it even easier to integrate on-device machine learning into mobile apps. Since then we’ve launched the Digital Ink Recognition API, and also introduced the ML Kit early access program. Our first two early access APIs were Pose Detection and Entity Extraction. We’ve received an overwhelming amount of interest in these new APIs and today, we are thrilled to officially add Pose Detection to the ML Kit lineup.

ML Kit Overview

A New ML Kit API, Pose Detection


Examples of ML Kit Pose Detection

ML Kit Pose Detection is an on-device, cross-platform (Android and iOS), lightweight solution that tracks a subject's physical actions in real time. With this technology, building a one-of-a-kind experience for your users is easier than ever.

The API produces a full-body, 33-point skeletal match that includes facial landmarks (ears, eyes, mouth, and nose), along with hands and feet tracking. The API was also trained on a variety of complex athletic poses, such as yoga positions.

Skeleton image detailing all 33 landmark points

Under The Hood

Diagram of the ML Kit Pose Detection Pipeline

The power of the ML Kit Pose Detection API is in its ease of use. The API builds on the cutting-edge BlazePose pipeline and allows developers to build great experiences on Android and iOS with little effort. We offer a full-body model, support for both video and static image use cases, and multiple pre- and post-processing improvements to help developers get started with only a few lines of code.

The ML Kit Pose Detection API uses a two-step process for detecting poses. First, the API combines an ultra-fast face detector with a prominent-person detection algorithm to detect when a person has entered the scene. The API detects a single (highest-confidence) person in the scene and requires the user's face to be present to ensure optimal results.

Next, the API applies a full body, 33 landmark point skeleton to the detected person. These points are rendered in 2D space and do not account for depth. The API also contains a streaming mode option for further performance and latency optimization. When enabled, instead of running person detection on every frame, the API only runs this detector when the previous frame no longer detects a pose.

The ML Kit Pose Detection API also features two operating modes, "Fast" and "Accurate". With the "Fast" mode enabled, you can expect a frame rate of around 30+ FPS on a modern Android device, such as a Pixel 4, and 45+ FPS on a modern iOS device, such as an iPhone X. With the "Accurate" mode enabled, you can expect more stable x,y coordinates on both types of devices, but a slower frame rate overall.

Lastly, we’ve also added a per point “InFrameLikelihood” score to help app developers ensure their users are in the right position and filter out extraneous points. This score is calculated during the landmark detection phase and a low likelihood score suggests that a landmark is outside the image frame.

Real World Applications


Examples of a pushup and squat counter using ML Kit Pose Detection

Keeping up with regular physical activity is one of the hardest things to do while at home. We often rely on gym buddies or physical trainers to help us with our workouts, but this has become increasingly difficult. Apps and technology can often help with this, but with existing solutions, many app developers are still struggling to understand and provide feedback on a user’s movement in real time. ML Kit Pose Detection aims to make this problem a whole lot easier.

The most common applications for pose detection are fitness and yoga trackers. It's possible to use our API to track pushups, squats, and a variety of other physical activities in real time. These complex use cases can be achieved by using the output of the API, either with angle heuristics, by tracking the distance between joints, or with your own proprietary classifier model.

To give you a jump start with classifying poses, we are sharing additional tips on how to use angle heuristics to classify popular yoga poses. Check it out here.
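As one hedged example of the angle-heuristic approach, the helper below computes the angle formed at a middle joint, such as an elbow given the shoulder, elbow, and wrist landmarks. The getAngle name is hypothetical, and the math assumes the 2D landmark positions described above:

import com.google.mlkit.vision.pose.PoseLandmark
import kotlin.math.abs
import kotlin.math.atan2

// Angle in degrees formed at midPoint by the segments toward
// firstPoint and lastPoint, normalized to the range [0, 180].
fun getAngle(firstPoint: PoseLandmark, midPoint: PoseLandmark, lastPoint: PoseLandmark): Double {
    var result = Math.toDegrees(
        (atan2(lastPoint.position.y - midPoint.position.y,
               lastPoint.position.x - midPoint.position.x) -
         atan2(firstPoint.position.y - midPoint.position.y,
               firstPoint.position.x - midPoint.position.x)).toDouble())
    result = abs(result)
    if (result > 180) result = 360.0 - result
    return result
}

A pushup counter could then, for instance, count a repetition each time the elbow angle crosses below and back above chosen thresholds.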

Learning to Dance Without Leaving Home

Learning a new skill is always tough, but learning to dance without the aid of a real time instructor is even tougher. One of our early access partners, Groovetime, has set out to solve this problem.

With the power of ML Kit Pose Detection, Groovetime allows users to learn their favorite dance moves from popular short-form dance videos, while giving users automated real time feedback on their technique. You can join their early access beta here.

Groovetime App using ML Kit Pose Detection

Staying Active Wherever You Are

Our Pose Detection API is also helping adidas Training, another one of our early access partners, build a virtual workout experience that will help you stay active no matter where you are. This one-of-a-kind innovation will analyze and give feedback on a user's movements, using nothing more than their phone. Integration into the adidas Training app is still in the early phases of the development cycle, but stay tuned for more updates in the future.

How to get started?

If you would like to start using the Pose Detection API in your mobile app, head over to the developer documentation or check out the sample apps for Android and iOS to see the API in action. For questions or feedback, please reach out to us through one of our community channels.

Google Docs mobile improvements: link previews and Smart Compose

What’s changing 

We’re improving the Android and iOS experiences for Google Docs users with two new features. These were previously available on the web, and are now available on mobile as well: 
  • Link previews, which help you get context from linked content without bouncing between apps and screens. 
  • Smart Compose, which helps you write faster and with more confidence. 

Read our Cloud Blog post to learn more about how these and other launches can help you collaborate from anywhere with Google Docs, Sheets, and Slides on mobile.


Who’s impacted 

End users 


Why it’s important 

These launches build on other recent improvements to the mobile user experience, including a new commenting interface in Docs on Android, dynamic email notifications for Gmail on mobile, and dark mode for Docs, Sheets, and Slides on Android.

Together, these features will help make it easier and quicker not only to read and review content on mobile devices, but also to create and collaborate on content, wherever you are. 


Additional details 

Link previews 
Linked content can enrich documents with useful information, but if clicking a link means opening another window, that can be distracting and disrupt your reading flow. Earlier this year, we launched link previews on the web. Now, we're adding link previews to mobile as well. When you tap a link in Docs, dynamic information about the content will appear. This may include the title, description, and thumbnail images from public web pages, or the owner and latest activity for linked Drive files. This can help you decide whether to open linked content while staying in context. 

Preview links in Google Docs on the web 


Preview links in Google Docs on mobile devices 

Smart Compose 

Smart Compose on mobile will help you write documents faster and reduce the chance of spelling and grammatical errors when working on the go. When a Smart Compose suggestion appears, simply swipe right to accept it. See more in our announcement for the feature on the web.

Getting started 

Admins: These features will be ON by default. There are no admin controls for them. 

End users: 
  • Link previews: This feature will be on by default. There is no setting to control the feature. 
  • Smart Compose: This feature may be on or off depending on whether you have turned it on or off on the web. When enabled, you'll automatically see suggestions; swipe right to accept a suggestion. Visit the Help Center to learn more about using Smart Compose in Google Docs.

Rollout pace 

  • Link previews in Docs, iOS and web
  • Link previews in Docs, Android
  • Smart Compose in Docs, iOS
  • Smart Compose in Docs, Android

Availability 

  • Link previews in Docs: Available to all G Suite customers and users with personal accounts. 
  • Smart Compose in Docs: Available to all G Suite customers. Not available to users with personal accounts. 

Resources 

Simplify management of company-owned iOS devices with new Apple Business Manager integration

What’s changing 

We’re launching an integration between Google endpoint management and Apple Business Manager (formerly the Device Enrollment Program, or DEP). This makes it possible to securely distribute and manage company-owned iOS devices from the Google Admin console. 

The integration will enable G Suite Enterprise, G Suite Enterprise for Education, G Suite Enterprise Essentials, and Cloud Identity Premium customers to set Google endpoint management as an MDM server on Apple Business Manager. 


Who’s impacted 

Admins 


Why you’d use it 

With the integration between Google endpoint management and Apple Business Manager: 
  • Admins can manage company-owned iOS devices directly from the Admin console, in the same location as they manage other devices that access their organization’s data. 
  • Admins can control a wider range of features, including app installation, Apple app usage, authentication methods, and more, as shown in this table of supervised company-owned iOS device settings. 
  • Apple Business Manager and Google endpoint management automatically sync for seamless device management. 
  • Users follow a simple device setup and enrollment through the built-in setup wizard. 
Apple Business Manager setup in the Admin console



Getting started 

  • Admins: To use this feature, you need to enable advanced mobile management for iOS devices in applicable OUs and have an Apple Business Manager account set up. Visit our Help Center to learn more about how to set up company-owned iOS device management.
  • End users: There is no end user setting for this feature. Once provisioned by an admin, users can follow the device setup wizard steps to enroll the device. Once the setup wizard is complete, the Google Device Policy app will automatically install and the user should sign in to it with their G Suite or Cloud Identity account. 

Rollout pace 

Availability 

  • Available to G Suite Enterprise, G Suite Enterprise for Education, G Suite Enterprise Essentials, and Cloud Identity Premium customers 
  • Not available to G Suite Basic, G Suite Business, G Suite for Education, G Suite for Nonprofits, and G Suite Essentials customers 

Resources 

Update your G Suite mobile and desktop apps before August 12, 2020, to ensure they continue working

Quick summary

In 2018, we began making changes to our API and service infrastructure to improve performance and security. As a result of these changes, some older versions of G Suite desktop and mobile apps may stop working on August 12, 2020. In particular, versions released prior to December 2018 may be impacted.

To ensure workflows are not disrupted, your users should update the following Google apps to the latest versions as soon as possible:

Getting started
  • Admins: Encourage your users to upgrade their apps. If you deploy Drive File Stream to your organization, ensure you’re using the latest version.
  • End users: Upgrade the apps listed above to the latest versions as soon as possible.
Rollout pace

Availability
  • This impacts all G Suite customers and users with personal Google accounts.

On-device machine learning solutions with ML Kit, now even easier to use

Posted by Christiaan Prins, Product Manager, ML Kit and Shiyu Hu, Tech Lead Manager, ML Kit

ML Kit logo

Two years ago at I/O 2018 we introduced ML Kit, making it easier for mobile developers to integrate machine learning into their apps. Today, more than 25,000 applications on Android and iOS make use of ML Kit's features. Now, we are introducing some changes that will make it even easier to use ML Kit. In addition, we have a new feature and a set of improvements we'd like to discuss.

A new ML Kit SDK, fully focused on on-device ML

ML Kit API Overview

ML Kit's APIs are built to help you tackle common challenges in the Vision and Natural Language domains. We make it easy to recognize text, scan barcodes, track and classify objects in real time, translate text, and more.

The original version of ML Kit was tightly integrated with Firebase, and we heard from many of you that you wanted more flexibility when implementing it in your apps. As a result, we are now making all the on-device APIs available in a new standalone ML Kit SDK that no longer requires a Firebase project. You can still use both ML Kit and Firebase to get the best of both products if you choose to.

With this change, ML Kit is now fully focused on on-device machine learning, giving you access to the unique benefits that on-device versus cloud ML offers:

  • It’s fast, unlocking real-time use cases: since processing happens on the device, there is no network latency. This means we can run inference on a stream of images or video, or multiple times a second on text strings.
  • It works offline: you can rely on our APIs even when the network is spotty or your app’s end user is in an area without connectivity.
  • Privacy is retained: since all processing is performed locally, there is no need to send sensitive user data over the network to a server.

Naturally, you still get access to Google’s on-device models and processing pipelines, all accessible through easy-to-use APIs, and offered at no cost.

All ML Kit resources can now be found on our new website, where we've made it a lot easier to access sample apps, API reference docs, and our community channels, which are there to help you if you have questions.

Object detection & tracking gif Text recognition + Language ID + Translate gif

What does this mean if I already use ML Kit today?

If you are using ML Kit for Firebase's on-device APIs in your app today, we recommend you migrate to the new standalone ML Kit SDK to benefit from new features and updates. For more information and step-by-step instructions for updating your app, please follow our Migration guide. The cloud-based APIs, model deployment, and AutoML Vision Edge remain available through Firebase Machine Learning.

Shrink your app footprint with Google Play Services

Apart from making ML Kit easier to use, developers also asked whether we could ship ML Kit through Google Play Services, resulting in a smaller app footprint and allowing the model to be reused between apps. In addition to Barcode scanning and Text recognition, we have now added Face detection / contour (model size: 20MB) to the list of APIs that support this functionality.

// Face detection / Face contour model
// Delivered via Google Play Services outside your app's APK…
implementation 'com.google.android.gms:play-services-mlkit-face-detection:16.0.0'

// …or bundled with your app's APK
implementation 'com.google.mlkit:face-detection:16.0.0'

Jetpack Lifecycle / CameraX support

Android Jetpack Lifecycle support has been added to all APIs. Developers can use addObserver to automatically manage the teardown of ML Kit APIs as the app goes through screen rotation or is closed by the user or system. This makes CameraX integration easier. With this release, we also recommend that developers adopt CameraX in their apps, given its ease of integration and image quality improvements (compared to Camera1) on a wide range of devices.

// ML Kit now supports Lifecycle
val recognizer = TextRecognizer.newInstance()
lifecycle.addObserver(recognizer)

// ...

// Just like CameraX
val camera = cameraProvider.bindToLifecycle( /* lifecycleOwner= */this,
    cameraSelector, previewUseCase, analysisUseCase)

For an overview of all recent changes, check out the release notes for the new SDK.

Codelab of the day - ML Kit x CameraX

To help you get started with the new ML Kit and its support for CameraX, we have created this codelab to Recognize, Identify Language and Translate text. If you have any questions regarding this codelab, please raise them on StackOverflow and tag them with [google-mlkit]. Our team will monitor the tag.

screenshot of app running

Early access program

Through our early access program, developers have an opportunity to partner with the ML Kit team and get access to upcoming features. Two new APIs are now available as part of this program:

  • Entity Extraction - Detect entities in text & make them actionable. We have support for phone numbers, addresses, payment numbers, tracking numbers, date/time and more.
  • Pose Detection - Low-latency pose detection supporting 33 skeletal points, including hands and feet tracking.

If you are interested, head over to our early access page for details.

pose detection on man jumping rope

Tomorrow - Support for custom models

ML Kit's turn-key solutions are built to help you tackle common challenges. However, if you need a more tailored solution, one that requires custom models, you have typically needed to build an implementation from scratch. To help, we are now providing the option to swap out the default Google models with a custom TensorFlow Lite model. We're starting with the Image Labeling and Object Detection and Tracking APIs, which now support custom image classification models.
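As a hedged sketch of what that swap could look like with the Image Labeling API, the snippet below loads a TensorFlow Lite classifier bundled in the app's assets; the "flowers.tflite" file name and the 0.5 confidence threshold are hypothetical:

import com.google.mlkit.common.model.LocalModel
import com.google.mlkit.vision.label.ImageLabeling
import com.google.mlkit.vision.label.custom.CustomImageLabelerOptions

// A custom TensorFlow Lite classifier shipped in the app's assets.
val localModel = LocalModel.Builder()
    .setAssetFilePath("flowers.tflite")
    .build()

// Use the custom model in place of the default Google model.
val options = CustomImageLabelerOptions.Builder(localModel)
    .setConfidenceThreshold(0.5f)  // drop low-confidence labels
    .build()
val labeler = ImageLabeling.getClient(options)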

Tomorrow, we will dive a bit deeper into how to find or train a TensorFlow Lite model and use it either with ML Kit, or with Android Studio’s new ML binding functionality.
