
How to bring your AI Model to Android devices

Posted by Kateryna Semenova – Senior Developer Relations Engineer and Mark Sherwood – Senior Product Manager

During AI on Android Spotlight Week, we're diving into how you can bring your own AI model to Android-powered devices such as phones, tablets, and beyond. By leveraging the tools and technologies available from Google and other sources, you can run sophisticated AI models directly on these devices, opening up exciting possibilities for better performance, privacy, and usability.

Understanding on-device AI

On-device AI involves deploying and executing machine learning or generative AI models directly on hardware devices, instead of relying on cloud-based servers. This approach offers several advantages, such as reduced latency, enhanced privacy, cost saving and less dependence on internet connectivity.

For generative text use cases, explore Gemini Nano, which is now available for experimental access through the AI Edge SDK. For many on-device AI use cases, you might want to package your own models in your app. Today we will walk through how to do so on Android.

Key resources for on-device AI

The Google AI Edge platform provides a comprehensive ecosystem for building and deploying AI models on edge devices. It supports various frameworks and tools, enabling developers to integrate AI capabilities seamlessly into their applications. The Google AI Edge platform consists of:

    • MediaPipe Tasks - Cross-platform low-code APIs to tackle common generative AI, vision, text, and audio tasks
    • LiteRT (formerly known as TensorFlow Lite) - Lightweight runtime for deploying custom machine learning models on Android
    • MediaPipe Framework - Pipeline framework for chaining multiple ML models along with pre- and post-processing logic
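
To get a feel for how low-code these APIs are, here is a minimal sketch of running the MediaPipe Tasks gesture recognition task on a bitmap. It assumes the MediaPipe Tasks vision dependency and a gesture_recognizer.task model bundled in your app's assets (both placeholders here); check the MediaPipe Tasks documentation for the exact artifact names and current API.

import android.content.Context
import android.graphics.Bitmap
import com.google.mediapipe.framework.image.BitmapImageBuilder
import com.google.mediapipe.tasks.core.BaseOptions
import com.google.mediapipe.tasks.vision.core.RunningMode
import com.google.mediapipe.tasks.vision.gesturerecognizer.GestureRecognizer

fun recognizeGesture(context: Context, bitmap: Bitmap): String? {
    // "gesture_recognizer.task" is a placeholder; download the bundle from the MediaPipe model page.
    val options = GestureRecognizer.GestureRecognizerOptions.builder()
        .setBaseOptions(BaseOptions.builder().setModelAssetPath("gesture_recognizer.task").build())
        .setRunningMode(RunningMode.IMAGE)
        .build()

    val recognizer = GestureRecognizer.createFromOptions(context, options)
    val result = recognizer.recognize(BitmapImageBuilder(bitmap).build())
    recognizer.close()

    // Return the top gesture category for the first detected hand, if any.
    return result.gestures().firstOrNull()?.firstOrNull()?.categoryName()
}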



How to build custom AI features on Android

    1. Define your use case: Before diving into technical details, it's crucial to clearly define what you want your AI feature to achieve. Whether you're aiming for image classification, natural language processing, or another application, having a well-defined goal will guide your development process.

    2. Choose the right tools and frameworks: Depending on your use case, you might be able to use an out-of-the-box solution, or you might need to create or source your own model. Look through MediaPipe Tasks for common solutions such as gesture recognition, image segmentation, or face landmark detection. If you find a solution that aligns with your needs, you can proceed directly to the testing and deployment step.



    If you need to create or source a custom model for your use case, you will need an on-device ML framework such as LiteRT (formerly TensorFlow Lite). LiteRT is designed specifically for mobile and edge devices and provides a lightweight runtime for deploying machine learning models. Simply follow these substeps:

        a. Develop and train your model: Develop your AI model using your chosen framework. Training can be performed on a powerful machine or cloud environment, but the model should be optimized for deployment on a device. Techniques like quantization and pruning can help reduce the model size and improve inference speed. Model Explorer can help you understand and explore your model as you're working with it.

        b. Convert and optimize the model: Once your model is trained, convert it to a format suitable for on-device deployment. LiteRT, for example, requires conversion to its specific format. Optimization tools can help reduce the model’s footprint and enhance performance. AI Edge Torch allows you to convert PyTorch models to run locally on Android and other platforms, using Google AI Edge LiteRT and MediaPipe Tasks libraries.

        c. Accelerate your model: You can speed up model inference on Android by using the GPU and NPU. LiteRT’s GPU delegate allows you to run your model on the GPU today; a minimal sketch of enabling it appears after this list. We’re working hard on building the next generation of GPU and NPU delegates that will make your models run even faster and enable more models to run on the GPU and NPU. We’d like to invite you to participate in our early access program to try out this new GPU and NPU infrastructure. We will select participants on a rolling basis, so don’t wait to reach out.

    3. Test and deploy: To ensure that your model delivers the expected performance across various devices, rigorous testing is crucial. Deploy your app to users after completing the testing phase, offering them a seamless and efficient AI experience. We're working on bringing the benefits of Google Play and Android App Bundles to delivering custom ML models for on-device AI features. Play for On-device AI takes the complexity out of launching, targeting, versioning, downloading, and updating on-device models so that you can offer your users a better user experience without compromising your app's size and at no additional cost. Complete this form to express interest in joining the Play for On-device AI early access program.
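
To make step 2c concrete, here is a minimal sketch of loading a bundled model with LiteRT and enabling the GPU delegate. It assumes the LiteRT (TensorFlow Lite) runtime, GPU delegate, and support library artifacts are on your classpath, and uses placeholder names and shapes (a my_model.tflite file in assets, a single float vector in, ten scores out); adapt these to your own model and the current LiteRT documentation.

import android.content.Context
import org.tensorflow.lite.Interpreter
import org.tensorflow.lite.gpu.CompatibilityList
import org.tensorflow.lite.gpu.GpuDelegate
import org.tensorflow.lite.support.common.FileUtil

fun createInterpreter(context: Context): Interpreter {
    val options = Interpreter.Options()
    // Use the GPU delegate only where the device supports it; otherwise stay on CPU.
    if (CompatibilityList().isDelegateSupportedOnThisDevice) {
        options.addDelegate(GpuDelegate())
    }
    // "my_model.tflite" is a placeholder for your converted, optimized model in assets/.
    val modelBuffer = FileUtil.loadMappedFile(context, "my_model.tflite")
    return Interpreter(modelBuffer, options)
}

fun runInference(interpreter: Interpreter, input: FloatArray): FloatArray {
    // Illustrative shapes: one input vector in, ten scores out. Match these to your model.
    val output = Array(1) { FloatArray(10) }
    interpreter.run(arrayOf(input), output)
    return output[0]
}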

Build trust in AI through privacy and transparency

With the growing role of AI in everyday life, ensuring models run as intended on devices is crucial. We're emphasizing a "zero trust" approach, providing developers with tools to verify device integrity and giving users control over their data. In this approach, developers need the ability to make informed decisions about a device's trustworthiness.

The Play Integrity API is recommended for developers looking to verify their app, server requests, and the device environment (and, soon, the recency of security updates on the device). You can call the API at important moments before your app’s backend decides to download and run your models. You can also consider turning on integrity checks for installing your app to reduce your app’s distribution to unknown and untrusted environments.
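
As an illustration, here is a minimal sketch of requesting a classic Play Integrity token before fetching a model. The nonce is assumed to come from your server, which is also where the verdict should be decrypted and verified before your backend decides to serve the model; the helper and callback names are placeholders.

import android.content.Context
import com.google.android.play.core.integrity.IntegrityManagerFactory
import com.google.android.play.core.integrity.IntegrityTokenRequest

fun requestIntegrityToken(context: Context, nonce: String, onToken: (String) -> Unit) {
    val integrityManager = IntegrityManagerFactory.create(context)
    integrityManager
        .requestIntegrityToken(IntegrityTokenRequest.builder().setNonce(nonce).build())
        .addOnSuccessListener { response ->
            // Send the token to your server; decide there whether to deliver the model.
            onToken(response.token())
        }
        .addOnFailureListener {
            // Treat failures as "do not download the model", or retry, per your policy.
        }
}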

Play Integrity API makes use of Android Platform Key Attestation to verify hardware components and generate integrity verdicts across the fleet, eliminating the need for most developers to directly integrate different attestation tools and reducing device ecosystem complexity. Developers can use one or both of these tools to assess device security and software integrity before deciding whether to trust a device to run AI models.

Conclusion

Bringing your own AI model to a device involves several steps, from defining your use case to deploying and testing the model. With resources like Google AI Edge, developers have access to powerful tools and insights to make this process smoother and more effective. As on-device AI continues to evolve, leveraging these resources will enable you to create cutting-edge applications that offer enhanced performance, privacy, and user experience. We are currently seeking early access partners to try out some of our latest tools and APIs at Google AI Edge. Simply fill in this form to connect and explore how we can work together to make your vision a reality.

Dive into these resources and start exploring the potential of on-device AI—your next big innovation could be just a model away!

Use the #AndroidAI hashtag to share your feedback or what you've built on social media, and catch up with the rest of the updates being shared during Spotlight Week: AI on Android.

Gemini Nano is now available on Android via experimental access

Posted by Taj Darra – Product Manager

Gemini, introduced last year, is Google’s most capable family of models yet; designed for flexibility, it can run on everything from data centers to mobile devices. Since announcing Gemini Nano, our most efficient model built for on-device tasks, we've been working with a limited set of partners to support a range of use cases for their apps.

Today, we’re opening up access to experiment with Gemini Nano to all Android developers with the AI Edge SDK via AICore. Developers will initially have access to experiment with text-to-text prompts on Pixel 9 series devices. Support for more devices and modalities will be added in the future. Check out our documentation and video to get started. Note that experimental access is for development purposes, and is not for production usage at this time.


Fast, private and cost-effective on-device AI

On-device generative AI processes prompts directly on your device without server calls. It offers many benefits: sensitive user data is processed locally on the device, the feature remains fully functional without internet connectivity, and there is no additional monetary cost for each inference.

Since on-device generative AI models run on devices with less computational power than cloud servers, they are significantly smaller and less generalized than their cloud-based equivalents. As a result, the model works best for tasks where the requests can be clearly specified rather than open-ended use cases such as chatbots. Here are some use cases you can try:

    • Rephrasing - Rephrasing and rewriting text to change the tone to be more casual or formal.
    • Smart reply - Given several chat messages in a thread, suggest the next likely response.
    • Proofreading - Removing spelling or grammatical errors from text.
    • Summarization - Generating a summary of a long document, either as a paragraph or as bullet points.

Check out our prompting strategies to achieve the best results when experimenting with the above use cases. If you want to test your own use case, you can download our sample app for an easy way to start experimenting with Gemini Nano.


Gemini Nano performance and usage

Compared to its predecessor, the model being made available to developers today (referred to in the academic paper as “Nano 2”) delivers a substantial improvement in quality. At nearly twice the size of the predecessor (“Nano 1”), it excels in both academic benchmarks and real-world applications, offering capabilities that rival much larger models.


          MMLU (5-shot)*   MATH (4-shot)*   Paraphrasing**   Smart Reply**
Nano 1    46%              14%              44%              44%
Nano 2    56%              23%              90%              82%

* As reported in Gemini: A Family of Highly Capable Multimodal Models. Note that both these models are a part of our Gemini 1.0 series.
** Percentage of good answers measured on public datasets via an autorater powered by Gemini 1.5 Pro.

Gemini Nano is already in use by Google apps. Pixel Screenshots, Talkback, Recorder and many more have leveraged Gemini Nano’s text and image understanding to deliver new experiences:

    • Talkback - Android’s accessibility app leverages Gemini Nano’s multimodal capabilities to improve image descriptions for blind and low vision users.

    • Pixel Recorder - Gemini Nano with Multimodality enables support for longer recordings and higher-quality summaries.

Seamless model integration with AI Edge SDK using AICore

Integrating generative AI models directly into mobile apps is challenging due to the significant computational resources and storage space they require. To address this challenge, we developed AICore, a new system service in Android. AICore allows you to benefit from AI running directly on the device without needing to distribute runtimes, models and other components yourself.

To run inference with Gemini Nano in AICore, you use the AI Edge SDK. The AI Edge SDK lets developers customize prompts and inference parameters to their specific needs, giving greater control over each inference.

To experiment with the AI Edge SDK, add the following to your app’s dependencies:

implementation("com.google.ai.edge.aicore:aicore:0.0.1-exp01")

The AI Edge SDK allows you to customize inference parameters. Some of the more commonly used parameters (illustrated in the configuration sketch after this list) include:

    • Temperature, which controls randomness. Higher values increase diversity and creativity of output.
    • Top K, which specifies how many tokens from the highest-ranking ones are to be considered.
    • Candidate count, which describes the maximum number of responses to return.
    • Max output tokens, which is the length of the desired response.
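
As an illustration, the sketch below builds a GenerativeModel with these parameters set through the SDK's generationConfig builder. The values are placeholders, and the exact constructor and property names may differ slightly in the experimental SDK, so treat this as a sketch rather than reference code.

import android.content.Context
import com.google.ai.edge.aicore.GenerativeModel
import com.google.ai.edge.aicore.generationConfig

fun buildModel(appContext: Context): GenerativeModel {
    return GenerativeModel(
        generationConfig = generationConfig {
            context = appContext        // application context used by AICore
            temperature = 0.2f          // lower values give more deterministic output
            topK = 16                   // sample only from the 16 highest-ranking tokens
            candidateCount = 1          // maximum number of responses to return
            maxOutputTokens = 256       // cap on the length of each response
        }
    )
}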

When you are ready to run the inference with your model, the AI Edge SDK offers an easy way to pass in multiple strings as input to accommodate long inference data.

Here’s an example:

scope.launch {
    // Single string input prompt
    val input = "I want you to act as an English proofreader. I will " +
        "provide you texts, and I would like you to review them for any " +
        "spelling, grammar, or punctuation errors. Once you have finished " +
        "reviewing the text, provide me with any necessary corrections or " +
        "suggestions for improving the text: " +
        "These arent the droids your looking for."
    val singleInputResponse = generativeModel.generateContent(input)
    print(singleInputResponse.text)

    // Or multiple strings as input
    val multiInputResponse = generativeModel.generateContent(
        content {
            text(
                "I want you to act as an English proofreader. I will " +
                    "provide you texts and I would like you to review them for " +
                    "any spelling, grammar, or punctuation errors."
            )
            text(
                "Once you have finished reviewing the text, " +
                    "provide me with any necessary corrections or suggestions " +
                    "for improving the text:"
            )
            text("These arent the droids your looking for.")
        }
    )
    print(multiInputResponse.text)
}

Our integration guide has more information on the AI Edge SDK as well as detailed instructions to start your experimentation with Gemini Nano. To learn more about prompting, check out the Gemini prompting strategies.


Get Started

Learn more about Gemini Nano for app development by watching our video walkthrough, and try out Gemini Nano experimental access in your own app today.

We are excited to see what you build and welcome your input as you evaluate this new technology for your use cases! Post your creations on social media and include the hashtag #AndroidAI to share what you build. To share your ideas and feedback for on-device GenAI and help shape our APIs, you can file a ticket.

There’s a lot more that we’re covering this week to help you build great AI experiences on Android, so be sure to check out the rest of the AI on Android Spotlight Week content!