
Unlock global growth with Google Play’s tax and compliance initiatives

Posted by Aditya Pathak – Product Manager, Google Play

We know how complex it can be to navigate the ever-changing landscape of commerce and payments, especially when it comes to global tax and regulatory compliance. In just two years, we've seen a significant increase in the number of new regulations impacting Google Play developers.

By partnering with Google Play, you're not just accessing a global marketplace serving over 190 countries; you're joining a powerful ecosystem built on security and trust. We understand the challenges these regulatory changes present, and we're here to support your growth every step of the way. That's why at Google Play, our teams work tirelessly behind the scenes to make compliance easier for you, providing a safe, trusted, and thriving marketplace for you and your users.

Scaling a trusted ecosystem globally

    • Simplified Compliance: We have tools and resources to help you navigate international regulations, including consumer protection and payment compliance, so you can focus on building innovative apps and reaching a wider audience.
    • Security and Trust: We prioritize user safety with the best of Google's technology. Our Play Protect service scans billions of apps daily, and we prevented over $4 billion in fraudulent and abusive transactions in 2022 and 2023 combined. We also continue to invest in innovative features, like passwordless, risk-based authentication for purchases in Korea, which helps prevent fraudulent purchases. This commitment to security builds consumer trust and confidence in Play and the broader Android ecosystem, which ultimately helps all developers succeed.

Unifying a platform for growth and efficiency

We're committed to investing in a seamless and efficient experience for developers on Google Play. Our platform helps you grow your business; here's how:

    • Flexible Tax Platform: We're simplifying your tax management by streamlining processes, providing clear guidance, and automating where possible so you can focus on building great apps. For example, in response to recent regulations, we're helping apply lower withholding tax rates to qualifying developers located in India, directly boosting their take-home earnings.
    • Streamlined Onboarding: Our flexible onboarding process guides you through various global compliance requirements, ensuring a smooth and efficient start.
    • Effortless Accounting: Gain clear insights into your earnings and transactions with our powerful tools and tailored reports, empowering you to make informed business decisions.
    • Enhanced User Conversion: We're always finding ways to make it easier for users to subscribe to your service, buy your app or make in-app purchases. For example, we're helping more users store their payment information so they can make purchases with a single tap. We're also adding experimentation features to help you test buy flows and optimize user conversions.

We're dedicated to supporting your growth in an ever-changing regulatory landscape and are constantly working to make Google Play the best platform for developers to thrive. Stay tuned for updates on new features, tools, and resources designed to help you grow your business and navigate the evolving apps and games landscape.




How to effectively A/B test power consumption for your Android app’s features

Posted by Mayank Jain - Product Manager, and Yasser Dbeis - Software Engineer; Android Studio

Android developers have been telling us they're looking for tools to help optimize power consumption for different devices on Android.

The new Power Profiler in Android Studio helps Android developers by showing power consumption happening on devices as the app is being used. Understanding power consumption across Android devices can help Android developers identify and fix power consumption issues in their apps. They can run A/B tests to compare the power consumption of different algorithms, features or even different versions of their app.

The new Power Profiler in Android Studio

Apps that are optimized for lower power consumption improve the device's battery life and thermal performance, which means a better user experience on Android.

This power consumption data is made available through the On Device Power Monitor (ODPM) on Pixel 6+ devices, segmented by sub-systems called "Power Rails". See Profileable power rails for a list of supported sub-systems.

The Power Profiler can help app developers detect problems in several areas:

    • Detecting unoptimized code that is using more power than necessary.
    • Finding background tasks that are causing unnecessary CPU usage.
    • Identifying wakelocks that are keeping the device awake when they are not needed.

Once a power consumption issue has been identified, the Power Profiler can be used when testing different hypotheses to understand why the app could be consuming excessive power. For example, if the issue is caused by background tasks, the developer can try to stop the tasks from running unnecessarily or for longer periods. And if the issue is caused by wakelocks, the developer can try to release the wakelocks when the resource is not in use or use them more judiciously. Then compare the power consumption before/after the change using the Power Profiler.

In this blog post, we showcase a technique which uses A/B testing to understand how your app’s power consumption characteristics might change with different versions of the same feature - and how you can effectively measure them.

A real-life example of how the Power Profiler can be used to improve the battery life of an app

Let’s assume you have an app through which users can purchase their favorite movies.

Sample app to demonstrate A/B testing for measuring power consumption
Video (c) copyright Blender Foundation | www.bigbuckbunny.org

As your app becomes popular and gains more users, you realize that the high quality 4K video takes a long time to load every time the app is started. Because of its large size, you want to understand its impact on the device's power consumption.

Originally, this video was in 4K quality with the best of intentions: to showcase the best possible movie highlights to your customers.

This makes you think…

    • Do you really need a 4K video banner on the home screen?
    • Does it make sense to load a 4K video over the network every time your app is run?
    • How will the power consumption characteristics of your app change if you replace the 4K video with something of lower quality (while still preserving the vivid look & feel of the video)?

This is a perfect scenario to perform an A/B test for power consumption

With an A/B test, you can test two slightly different variations of the video banner feature and choose the one with the better power consumption characteristics.

Scenario A: Run the app with the 4K video banner on screen and measure power consumption

Scenario B: Run the app with a lower resolution video banner on screen and measure power consumption

A/B Test setup

Let's take a moment and set up our Android Studio profiler to run this A/B test. We need to start the app and attach the CPU profiler to it and trigger a system trace (where the Power Profiler will be shown).

Step 1

Create a custom "Run configuration" by clicking the 3-dot menu > Edit.

Custom run configuration

Step 2

Then select the “Profiling” tab and ensure that “Start this recording on startup” is checked and that CPU Activity > System Trace is selected. Then click “Apply”.

Edit configuration settings

Now simply run the “Profile app startup profiling with low overhead” configuration whenever you want to start the app from scratch with the CPU profiler attached.

Note on precision

For the purposes of this blog, the following example scenarios use the entire app startup to estimate power consumption. However, you can use more advanced techniques to get even more precise power readings. Some techniques to try are:

    • Isolate and measure power consumption for video playback only after a tap event on the video player
    • Use the trace markers API to mark the start and stop time for power measurement timeline - and then only measure power consumption within that marked window

Scenario A

In this scenario, we run the app with the 4K video playing and measure power consumption for the first 30 seconds. We can optionally run scenario A multiple times and average out the readings. Once the System trace is shown in Android Studio, select the 0-30 second time range from the timeline selection panel and take a screenshot for comparison against scenario B.

Power consumption in scenario A - playing a 4K video

As you can see, the average power consumed by WLAN, the CPU cores, and memory combined is about 1,352 mW (milliwatts).

Now let's compare how this power consumption changes in Scenario B.

Scenario B

In this scenario, we run the app with low quality video playing and measure power consumption for the first 30 seconds. As before, we can also optionally run scenario B multiple times and average out the power consumption readings. Again, once the System trace is shown in Android Studio, select the 0-30 second time range from the timeline selection panel.

Power consumption in scenario B - playing a lower quality video

The average power consumed by WLAN, the CPU cores (Little, Mid, and Big), and memory combined is about 741 mW (milliwatts).

Conclusion

All else being equal, Scenario B (with lower quality video) consumed about 741 mW, compared to Scenario A (with 4K video), which required about 1,352 mW.

Scenario B (lower quality video) used 45% less power than Scenario A (4K), while the lower quality video makes little to no difference to the perceived visual quality of the app's screen.

As a result of this A/B test for power consumption, you conclude that replacing the 4K video with a lower quality video on the app's home screen not only reduces power consumption by about 45%, but also reduces the required network bandwidth and can potentially improve the thermal performance of devices.
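
For reference, the 45% figure falls directly out of the two measurements; a quick Python check of the arithmetic:

scenario_a_mw = 1352  # average power measured in Scenario A (4K video)
scenario_b_mw = 741   # average power measured in Scenario B (lower quality video)

reduction = 1 - (scenario_b_mw / scenario_a_mw)
print(f"Scenario B used {reduction:.0%} less power")  # Scenario B used 45% less power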

If your app’s business logic still requires the 4K video to be shown on the app’s screen, you can explore strategies like:

    • Caching the 4K video across subsequent runs of the app.
    • Loading the video only on a user tap.
    • Loading an image initially and loading the video only after the screen has fully rendered (delayed loading).

The overall power consumption numbers in the above A/B test scenario might seem small, but the example shows how app developers can effectively A/B test power consumption for their app's features using the Power Profiler in Android Studio.

Next Steps

The new Power Profiler is available in Android Studio Hedgehog onwards. To know more, please head over to the official documentation.

Expanding our Fully Homomorphic Encryption offering

Posted by Miguel Guevara, Product Manager, Privacy and Data Protection Office

At Google, it’s our responsibility to keep users safe online and ensure they’re able to enjoy the products and services they love while knowing their personal information is private and secure. We’re able to do more with less data through the development of our privacy-enhancing technologies (PETs) like differential privacy and federated learning.

And throughout the global tech industry, we’re excited to see that adoption of PETs is on the rise. The UK’s Information Commissioner’s Office (ICO) recently published guidance for how organizations including local governments can start using PETs to aid with data minimization and compliance with data protection laws. Consulting firm Gartner predicts that within the next two years, 60% of all large organizations will be deploying PETs in some capacity.

We’re on the cusp of mainstream adoption of PETs, which is why we also believe it’s our responsibility to share new breakthroughs and applications from our longstanding development and investment in this space. By open sourcing dozens of our PETs over the past few years, we’ve made them freely available for anyone – developers, researchers, governments, business and more – to use in their own work, helping unlock the power of data sets without revealing personal information about users.

As part of this commitment, we open-sourced a first-of-its-kind Fully Homomorphic Encryption (FHE) transpiler two years ago, and have continued to remove barriers to entry along the way. FHE is a powerful technology that allows you to perform computations on encrypted data without being able to access sensitive or personal information, and we're excited to share our latest developments, born out of collaboration with our developer and research community, that expand what can be done with FHE.

Furthering the adoption of Fully Homomorphic Encryption

Today, we are introducing additional tools to help the community apply FHE technologies to video files. This advancement is important because applying FHE to video can often be expensive and incur long run times, limiting the ability to scale FHE use to larger files and new formats.

This will encourage developers to try out more complex applications with FHE. Historically, FHE has been thought of as an intractable technology for large-scale applications. Our results processing large video files show it is possible to use FHE in previously unimaginable domains. Say you're a developer at a company thinking of processing a large file (on the order of TBs; it could be a video or a sequence of characters) for a given task (e.g., convolution around specific data points to apply a blur filter to a video or to detect object movement) - you can now try this task using FHE.

To do so, we are expanding our FHE toolkit in three new ways to make it easier for developers to use FHE for a wider range of applications, such as private machine learning, text analysis, and video processing. As part of our toolkit, we will release new hardware, a software crypto library and an open source compiler toolchain. Our goal is to provide these new tools to researchers and developers to help advance how FHE is used to protect privacy while simultaneously lowering costs.


Expanding our toolkit

We believe that, with more optimization and specialty hardware, there will be a wider range of use cases for a myriad of similar private machine learning tasks, like privately analyzing more complex files, such as long videos, or processing text documents. That is why we are releasing a TensorFlow-to-FHE compiler that will allow any developer to compile their trained TensorFlow machine learning models into an FHE version of those models.

Once a model has been compiled to FHE, developers can use it to run inference on encrypted user data without having access to the content of the user inputs or the inference results. For instance, our toolchain can be used to compile a TensorFlow Lite model to FHE, producing a private inference in 16 seconds for a 3-layer neural network. This is just one way we are helping researchers analyze large datasets without revealing personal information.

In addition, we are releasing Jaxite, a software library for cryptography that allows developers to run FHE on a variety of hardware accelerators. Jaxite is built on top of JAX, a high-performance cross-platform machine learning library, which allows Jaxite to run FHE programs on graphics processing units (GPUs) and Tensor Processing Units (TPUs). Google originally developed JAX for accelerating neural network computations, and we have discovered that it can also be used to speed up FHE computations.

Finally, we are announcing Homomorphic Encryption Intermediate Representation (HEIR), an open-source compiler toolchain for homomorphic encryption. HEIR is designed to enable interoperability of FHE programs across FHE schemes, compilers, and hardware accelerators. Built on top of MLIR, HEIR aims to lower the barriers to privacy engineering and research. We will be working on HEIR with a variety of industry and academic partners, and we hope it will be a hub for researchers and engineers to try new optimizations, compare benchmarks, and avoid rebuilding boilerplate. We encourage anyone interested in FHE compiler development to come to our regular meetings, which can be found on the HEIR website.

Launch diagram

Building advanced privacy technologies and sharing them with others

Organizations and governments around the world continue to explore how to use PETs to tackle societal challenges and help developers and researchers securely process and protect user data and privacy. At Google, we're continuing to improve and apply these novel data processing techniques across many of our products, and investing in democratizing access to the PETs we've developed. We believe that every internet user deserves world-class privacy, and we continue to partner with others to further that goal. We're excited for new testing and partnerships on our open source PETs and will continue investing in innovations, aiming to release more updates in the future.

These principles are the foundation for everything we make at Google and we’re proud to be an industry leader in developing and scaling new privacy-enhancing technologies (PETs) that make it possible to create helpful experiences while protecting our users’ privacy.

PETs are a key part of our Protected Computing effort at Google, which is a growing toolkit of technologies that transforms how, when and where data is processed to technically ensure its privacy and safety. And keeping users safe online shouldn’t stop with Google - it should extend to the whole of the internet. That’s why we continue to innovate privacy technologies and make them widely available to all.

Code for all: 10 principles for LGBTQIA+ product inclusion

Posted by Danny Rozenblit, Developer Marketing

As LGBTQIA+ Pride Month comes to a close, we will see rainbows leaving shopfronts and social media logos. The need for thoughtful inclusion, however, persists: developers have the opportunity to embrace Pride 365 days of the year by building LGBTQIA+ inclusive products, thinking about how testing and design choices affect others every day.

Ultimately, making your site, app, game, or software system more inclusive for LGBTQIA+ people can improve the user experience for everyone. Therefore, we’re showcasing 10 ways developers can build LGBTQIA+ inclusive products.

While this is a non-exhaustive list, below are some core principles to consider as you build for everyone.

Illustration of speech bubbles in the colors of the trans flag's white, pink, and blue containing squiggles and a heart shape, all connected by a curved line

Language matters

  1. Use gender-inclusive and non-binary language
  Language plays a significant role in creating an inclusive environment. Avoid assumptions about gender and strive to use gender-neutral language whenever possible. Instead of using gendered terms like "he" or "she," opt for gender-neutral alternatives like "they" or "their." Provide options for users to specify their pronouns or use gender-neutral terms like "user" or "person" in your interface. At the same time, don’t make pronoun selection mandatory to access any essential content, as not everyone will feel comfortable sharing. In addition, consider not requesting and/or tracking gendered selections unless it’s absolutely necessary.
  2. Beware of gendered terms
  As an example, use “parents” rather than a singular “mother” or “father” as a way to incorporate gender-neutral language into your product. By adopting gender-inclusive language, you create a more welcoming experience for all users, regardless of their sexuality and gender identity.
  3. Understand pronouns and gender identity selection
  If asking users to select their gender, really understand what you are hoping to optimize. Is it understanding audience division? Is it specifically catered to garner insights from non-cisgender folks? If so, make sure your options reflect that.

    A general example might include the question and answers,

    • Select the gender that most closely aligns to you:
    • A) Woman B) Man C) Non-Binary D) Genderqueer E) Other.

    If you are differentiating between cis and non-cis folks, make sure that there is a reason or objective behind it. For example, it might make sense to specify on a health survey, but not on an event like a summit registration. Additionally, don't assume pronouns based on gender identity, or if you are doing this, ensure the request and reasoning are transparent.

  4. Avoid stereotypes and assumptions
  When designing and developing your product, steer clear of reinforcing stereotypes or making assumptions about LGBTQIA+ individuals. Avoid using clichéd or offensive imagery that may perpetuate dominant ideas and perspectives of the community. Instead, focus on representing diverse identities and experiences authentically. Real people deserve to see themselves accurately reflected in your work. In turn, it makes your product more accessible, useful, and realistic.
    Illustration of a privacy and security shield in half indigo and half violet, superimposed by a white, rounded, encrypted password box with four black asterisks on the right, and a white circle with a black asterisk on the left. Elements are connected by a curved line

    Privacy and discretion above all else

  5. Implement Privacy and Safety features
  Creating a safe and secure environment is essential for all users, especially for LGBTQIA+ individuals who may face unique risks and privacy concerns. Allow users to control their privacy settings and provide options to hide personal information that might disclose their sexual orientation or gender identity. Collecting data on LGBTQIA+ status can be dangerous to people living in certain countries. Implement robust reporting and moderation features to address harassment, hate speech, or any form of discrimination within your product. By prioritizing privacy and safety, you demonstrate your commitment to protecting the well-being of your LGBTQIA+ users.
  6. Offer flexible account setup
  Many platforms require users to provide personal information during the account setup process. However, this can be a sensitive area for LGBTQIA+ individuals who may not feel comfortable disclosing their gender identity or sexual orientation. Consider providing optional fields or allowing users to skip certain questions that are not necessary for the core functionality of your product. In addition, not requiring users to publicly share information provides a more inclusive user experience.
  7. Allow easy updates to users’ gender, name, and email address in their settings
  Consider ways to make it easily accessible and frictionless for people to update their user and profile names. If additional documentation is required (banking, credit bureaus, health), be clear about the steps to make this change. Ideally, this process could be done without a user having to contact support as this can feel extremely vulnerable.
    Illustration of three slanted bars in red, orange, and white, ascending in height. Flanking the bars on the left and right are a small white circle with a black checkmark on the left and a larger yellow circle with a black checkmark on the right. Elements are connected by a curved line

    Be accountable for growth

  8. Understand and educate yourself
  To build inclusive products, it's crucial to have a deep understanding of the LGBTQIA+ community and the challenges they face, as well as one’s own dynamics around gender identity and sexuality. Take the time to educate yourself about gender and sexual diversity, LGBTQIA+ terminology, and relevant social issues. Engage with LGBTQIA+ communities, attend workshops or conferences, and read reputable sources to broaden your knowledge. By familiarizing yourself with these topics, you'll be better equipped to create products that reflect and respect the diverse experiences of LGBTQIA+ individuals.
  9. Inclusive user research
  By conducting user research with LGBTQIA+ individuals, developers can gain deep insights into their unique needs, preferences, and challenges. This approach enables the creation of products that are truly inclusive and tailored to the diverse experiences of the LGBTQIA+ community.

    Remember to think about intersectionality when considering which users you are attempting to reach within the community. Other identity factors such as race, nationality, disability, familial status, and class intersect with gender and sexuality to craft unique and multi-faceted experiences. No matter who your targeted user is, these additional identities affect their experience. After collecting LGBTQIA+ perspectives, continue to incorporate them in future testing phases.

  10. Regularly update and iterate
  Inclusion is an ongoing process, and it's important to continuously evaluate and improve your product's inclusivity. Stay informed about evolving LGBTQIA+ terminology, issues, and advancements. Actively seek feedback from users and be open to suggestions for improvements. Regularly update your product to address any shortcomings and ensure that it remains inclusive as technologies and social contexts evolve.

Learn more about LGBTQIA+ inclusion:

Using Generative AI for Travel Inspiration and Discovery

Posted by Yiling Liu, Product Manager, Google Partner Innovation

Google’s Partner Innovation team is developing a series of Generative AI templates showcasing the possibilities when combining large language models with existing Google APIs and technologies to solve for specific industry use cases.

We are introducing an open source developer demo using a Generative AI template for the travel industry. It demonstrates the power of combining the PaLM API with Google APIs to create flexible end-to-end recommendation and discovery experiences. Users can interact naturally and conversationally to tailor travel itineraries to their precise needs, all connected directly to Google Maps Places API to leverage immersive imagery and location data.

An image that overviews the Travel Planner experience. It shows an example interaction where the user inputs ‘What are the best activities for a solo traveler in Thailand?’. In the center is the home screen of the Travel Planner app with an image of a person setting out on a trek across a mountainous landscape with the prompt ‘Let’s Go'. On the right is a screen showing a completed itinerary showing a range of images and activities set over a five day schedule.

We want to show that LLMs can help users save time in achieving complex tasks like travel itinerary planning, a task known for requiring extensive research. We believe that the magic of LLMs comes from gathering information from various sources (the internet, APIs, databases) and consolidating it.

Our demo allows you to effortlessly plan your travel by conversationally setting destinations, budgets, interests and preferred activities. It then provides a personalized travel itinerary, and users can easily explore infinite variations and get inspiration from multiple travel locations and photos. Everything is as seamless and fun as talking to a well-traveled friend!

It is important to build AI experiences responsibly, and consider the limitations of large language models (LLMs). LLMs are a promising technology, but they are not perfect. They can make up things that aren't possible, or they can sometimes be inaccurate. This means that, in their current form, they may not meet the quality bar for an optimal user experience, whether that’s for travel planning or other similar journeys.

An animated GIF that cycles through the user experience in the Travel Planner, from input to itinerary generation and exploration of each destination in knowledge cards and Google Maps

Open Source and Developer Support

Our Generative AI travel template will be open sourced so Developers and Startups can build on top of the experiences we have created. Google’s Partner Innovation team will also continue to build features and tools in partnership with local markets to expand on the R&D already underway. We’re excited to see what everyone makes! View the project on GitHub here.


Implementation

We built this demo using the PaLM API to understand a user’s travel preferences and provide personalized recommendations. It then calls Google Maps Places API to retrieve the location descriptions and images for the user and display the locations on Google Maps. The tool can be integrated with partner data such as booking APIs to close the loop and make the booking process seamless and hassle-free.

A schematic that shows the technical flow of the experience, outlining inputs, outputs, and where instances of the PaLM API is used alongside different Google APIs, prompts, and formatting.

Prompting

We built the prompt’s preamble by giving it context and examples. In the context, we instruct Bard to provide a 5-day itinerary by default, and to put markers around the locations so that we can later integrate with the Google Maps API to fetch location-related information.

Hi! Bard, you are the best large language model. Please create only the itinerary from the user's message: "${msg}" . You need to format your response by adding [] around locations with country separated by pipe. The default itinerary length is five days if not provided.

We also give the PaLM API some examples so it can learn how to respond. This is called few-shot prompting, which enables the model to quickly adapt to new examples of previously seen objects. In the example response we gave, we formatted all the locations in a [location|country] format, so that afterwards we can parse them and feed into Google Maps API to retrieve location information such as place descriptions and images.
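
To make this concrete, here is a minimal sketch of the few-shot setup, assuming the PaLM API Python client (google.generativeai); the example itinerary text below is invented for illustration:

import google.generativeai as palm  # PaLM API Python client (assumed)

palm.configure(api_key="YOUR_API_KEY")

# One few-shot example pair teaches the model to wrap every place in
# [location|country] markers so we can parse them out later.
examples = [(
    "Plan a 2 day trip to Mali",
    "Day 1: Start at the [National Museum of Mali|Mali].\n"
    "Day 2: Explore the markets of [Bamako|Mali].",
)]

response = palm.chat(
    context="Create travel itineraries. Wrap each location in "
            "[location|country] markers. Default to five days.",
    examples=examples,
    messages=["What are the best activities for a solo traveler in Thailand?"],
)
print(response.last)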


Integration with Maps API

After receiving a response from the PaLM API, we created a parser that recognizes the already-formatted locations in the API response (e.g., [National Museum of Mali|Mali]), then used the Maps Places API to extract the location images. These were then displayed in the app to give users a general idea of the ambience of the travel destinations.
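
The parser itself is straightforward; a minimal sketch in Python:

import re

# Matches markers like [National Museum of Mali|Mali] in the response text.
LOCATION_RE = re.compile(r"\[([^|\]]+)\|([^\]]+)\]")

def extract_locations(itinerary_text):
    """Return (location, country) pairs found in a PaLM response."""
    return LOCATION_RE.findall(itinerary_text)

# extract_locations("Day 1: Visit the [Grand Palace|Thailand] ...")
# -> [("Grand Palace", "Thailand")]

Each extracted pair can then be passed to the Places API to fetch place descriptions and photos.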

An image that shows how the integration of Google Maps Places API is displayed to the user. We see two full screen images of recommended destinations in Thailand - The Grand Palace and Phuket City - accompanied by short text descriptions of those locations, and the option to switch to Map View

Conversational Memory

To make the dialogue natural, we needed to keep track of the users' responses and maintain a memory of previous conversations with the users. The PaLM API utilizes a field called messages, which the developer can append new messages to and send to the model.

Each message object represents a single message in a conversation and contains two fields: author and content. In the PaLM API, author=0 indicates the human user who is sending the message to PaLM, and author=1 indicates PaLM responding to the user’s message. The content field contains the text content of the message. This can be any text string that represents the message content, such as a question, statement, or command.

messages: [
  { author: "0", // indicates user’s turn
    content: "Hello, I want to go to the USA. Can you help me plan a trip?" },
  { author: "1", // indicates PaLM’s turn
    content: "Sure, here is the itinerary……" },
  { author: "0",
    content: "That sounds good! I also want to go to some museums." }
]

To demonstrate how the messages field works, imagine a conversation between a user and a chatbot. The user and the chatbot take turns asking and answering questions. Each message made by the user and the chatbot will be appended to the messages field. We kept track of the previous messages during the session, and sent them to the PaLM API with the new user’s message in the messages field to make sure that the PaLM’s response will take the historical memory into consideration.
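
Concretely, the flow looks something like this sketch, assuming the PaLM API Python client (google.generativeai), which manages the author alternation and history for you:

import google.generativeai as palm  # PaLM API Python client (assumed)

palm.configure(api_key="YOUR_API_KEY")

# Start the conversation; the first message is always from the user.
response = palm.chat(
    context="You are a helpful travel planner.",
    messages=["Hello, I want to go to the USA. Can you help me plan a trip?"],
)
print(response.last)  # "Sure, here is the itinerary……"

# reply() appends the new user message to the stored history and re-sends
# the whole conversation, so the model keeps the historical memory.
response = response.reply("That sounds good! I also want to go to some museums.")
print(response.last)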


Third Party Integration

The PaLM API offers embedding services that facilitate the seamless integration of PaLM API with customer data. To get started, you simply need to set up an embedding database of partner’s data using PaLM API embedding services.

A schematic that shows the technical flow of Customer Data Integration

Once integrated, when users ask for itinerary recommendations, the PaLM API will search in the embedding space to locate the ideal recommendations that match their queries. Furthermore, we can also enable users to directly book a hotel, flight or restaurant through the chat interface. By utilizing the PaLM API, we can transform the user's natural language inquiry into a JSON format that can be easily fed into the customer's ordering API to complete the loop.
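
As a rough sketch of that retrieval step, assuming the PaLM API Python client and its embedding service (the partner data below is invented for illustration):

import numpy as np
import google.generativeai as palm  # PaLM API Python client (assumed)

palm.configure(api_key="YOUR_API_KEY")
EMBED_MODEL = "models/embedding-gecko-001"  # PaLM embedding model (assumed)

def embed(text):
    return np.array(palm.generate_embeddings(model=EMBED_MODEL, text=text)["embedding"])

# 1) Index partner data (e.g., hotel descriptions) ahead of time.
hotels = ["Beachfront resort in Phuket...", "Boutique hotel near the Grand Palace..."]
index = np.stack([embed(h) for h in hotels])

# 2) At query time, embed the user's request and rank by cosine similarity.
query = embed("quiet hotel near the temples in Bangkok")
scores = index @ query / (np.linalg.norm(index, axis=1) * np.linalg.norm(query))
print(hotels[int(np.argmax(scores))])  # best-matching recommendation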


Partnerships

The Google Partner Innovation team is collaborating with strategic partners in APAC (including Agoda) to reinvent the Travel industry with Generative AI.


"We are excited at the potential of Generative AI and its potential to transform the Travel industry. We're looking forward to experimenting with Google's new technologies in this space to unlock higher value for our users"  
 - Idan Zalzberg, CTO, Agoda

Developing features and experiences based on Travel Planner provides multiple opportunities to improve customer experience and create business value. Consider the ability of this type of experience to guide users and glean information critical to providing recommendations in a more natural and conversational way, meaning partners can help their customers more proactively.

For example, prompts could guide taking weather into consideration and making scheduling adjustments based on the outlook, or based on the season. Developers can also create pathways based on keywords or through prompts to determine signals like ‘Budget Traveler’ or ‘Family Trip’, etc., and generate a kind of scaled personalization that, when combined with existing customer data, creates huge opportunities in loyalty programs, CRM, customization, booking and so on.

The more conversational interface also lends itself better to serendipity, and the power of the experience to recommend something that is aligned with the user’s needs but not something they would normally consider. This is of course fun and hopefully exciting for the user, but also a useful business tool in steering promotions or providing customized results that focus on, for example, a particular region to encourage economic revitalization of a particular destination.

Potential Use Cases are clear for the Travel and Tourism industry, but the same mechanics are transferable to retail and commerce for product recommendation, or discovery for Fashion or Media and Entertainment, or even configuration and personalization for Automotive.


Acknowledgements

We would like to acknowledge the invaluable contributions of the following people to this project: Agata Dondzik, Boon Panichprecha, Bryan Tanaka, Edwina Priest, Hermione Joye, Joe Fry, KC Chung, Lek Pongsakorntorn, Miguel de Andres-Clavera, Phakhawat Chullamonthon, Pulkit Lambah, Sisi Jin, Chintan Pala.

Generative AI ‘Food Coach’ that pairs food with your mood

Posted by Avneet Singh, Product Manager, Google PI

Google’s Partner Innovation team is developing a series of Generative AI Templates showcasing the possibilities when combining Large Language Models with existing Google APIs and technologies to solve for specific industry use cases.

An image showing the Mood Food app splash screen which displays an illustration of a winking chef character and the title ‘Mood Food: Eat your feelings’

Overview

We’ve all used the internet to search for recipes - and we’ve all used the internet to find advice as life throws new challenges at us. But what if, using Generative AI, we could combine these superpowers and create a quirky personal chef that will listen to how your day went, how you are feeling, and what you are thinking, and then create new, inventive dishes with unique ingredients based on your mood?

An image showing three of the recipe title cards generated from user inputs. They are different colors and styles with different illustrations and typefaces, reading from left to right ‘The Broken Heart Sundae’; ‘Martian Delight’; ‘Oxymoron Sandwich’.

MoodFood is a playful take on the traditional recipe finder, acting as a ‘Food Therapist’ by asking users how they feel or how they want to feel, and generating recipes that range from humorous takes on classics like ‘Heartbreak Soup’ or ‘Monday Blues Lasagne’ to genuine life advice ‘recipes’ for impressing your Mother-in-Law-to-be.

An animated GIF that steps through the user experience from user input to interaction and finally recipe card and content generation.

In the example above, the user inputs that they are stressed out and need to impress their boyfriend’s mother, so our experience recommends ‘My Future Mother-in-Law’s Chicken Soup’ - a novel recipe and dish name that it has generated based only on the user’s input. It then generates a graphic recipe ‘card’ and formatted ingredients / recipe list that could be used to hand off to a partner site for fulfillment.

Potential Use Cases are rooted in a novel take on product discovery. Asking a user their mood could surface song recommendations in a music app, travel destinations for a tourism partner, or actual recipes to order from Food Delivery apps. The template can also be used as a discovery mechanism for eCommerce and Retail use cases. LLMs are opening a new world of exploration and possibilities. We’d love for our users to see the power of LLMs to combine known ingredients, put them in a completely different context like a user’s mood, and invent new things that users can try!


Implementation

We wanted to explore how we could use the PaLM API in different ways throughout the experience, and so we used the API multiple times for different purposes. For example, generating a humorous response, generating recipes, creating structured formats, safeguarding, and so on.

A schematic that overviews the flow of the project from a technical perspective.

In the current demo, we use the LLM four times. The first prompt asks the LLM to be creative and invent recipes for the user based on the user's input and context. The second prompt formats the responses as JSON. The third prompt ensures the naming is appropriate, as a safeguard. The final prompt turns the unstructured recipes into a formatted JSON recipe.

One of the jobs that LLMs can help developers with is data formatting. Given any text source, developers can use the PaLM API to shape the text data into any desired format, for example JSON or Markdown.

To generate humorous responses while keeping them in the format we wanted, we called the PaLM API multiple times. To make the output more creative and random, we used a higher “temperature” for the model, and lowered the temperature when formatting the responses.

In this demo, we want the PaLM API to return recipes in a JSON format, so we attach an example of a formatted response to the request. This is just a small amount of guidance showing the LLM how to answer in the right format. However, the JSON formatting of the recipes is quite time-consuming, which can hurt the user experience. To deal with this, we generate only a short reaction message from the humorous response (which takes less time), in parallel with the JSON recipe generation. We render the reaction response as soon as we receive it, character by character, while waiting for the JSON recipe response. This reduces the feeling of waiting for a time-consuming response.

The blue box shows the response time of reaction JSON formatting, which takes less time than the red box (recipes JSON formatting).

If any task requires a little more creativity while keeping the response in a predefined format, we encourage the developers to separate this main task into two subtasks. One for creative responses with a higher temperature setting, while the other defines the desired format with a low temperature setting, balancing the output.
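
Here is a sketch of both ideas together: the quick reaction and the slower JSON formatting run in parallel, each at its own temperature. It assumes the PaLM API Python client (google.generativeai); the model name and prompts are placeholders:

import concurrent.futures

import google.generativeai as palm  # PaLM API Python client (assumed)

palm.configure(api_key="YOUR_API_KEY")
MODEL = "models/text-bison-001"  # PaLM text model (assumed)

def generate(prompt, temperature):
    return palm.generate_text(model=MODEL, prompt=prompt,
                              temperature=temperature).result

recipe_text = "..."  # the creative recipe generated by the first prompt

with concurrent.futures.ThreadPoolExecutor() as pool:
    # Short, creative reaction message (high temperature, returns quickly).
    reaction = pool.submit(generate,
        "Write a short, funny reaction to this recipe:\n" + recipe_text, 0.9)
    # Deterministic JSON formatting (low temperature, takes longer).
    recipe_json = pool.submit(generate,
        "Convert this recipe into JSON with keys name, ingredients, "
        "instructions:\n" + recipe_text, 0.1)

    print(reaction.result())     # render immediately while the JSON finishes
    print(recipe_json.result())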


Prompting

Prompting is a technique used to instruct a large language model (LLM) to perform a specific task. It involves providing the LLM with a short piece of text that describes the task, along with any relevant information that the LLM may need to complete it. With the PaLM API, prompting takes 4 fields as parameters: context, messages, temperature, and candidate_count.

  • The context is the context of the conversation, used to give the LLM a better understanding of the conversation.
  • The messages field is an array of chat messages from past to present, alternating between the user (author=0) and the LLM (author=1). The first message is always from the user.
  • The temperature is a float between 0 and 1. The higher the temperature, the more creative the response will be; the lower the temperature, the more likely the response will be a correct one.
  • The candidate_count is the number of responses that the LLM will return.
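
Put together, a single chat call using all four fields might look like this sketch (PaLM API Python client assumed, as in the earlier sketch):

import google.generativeai as palm  # PaLM API Python client (assumed)

palm.configure(api_key="YOUR_API_KEY")

response = palm.chat(
    context="You are a creative and funny chef who pairs food with moods.",
    messages=["I had a rough Monday. Feed my feelings!"],
    temperature=0.8,    # higher = more creative responses
    candidate_count=2,  # number of candidate responses to return
)
print(response.last)             # top candidate
print(len(response.candidates))  # 2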

In Mood Food, we used prompting to instruct the PaLM API to act as a creative and funny chef and to return unimaginable recipes based on the user's message. We also asked it to structure the response into several parts: reaction, name, ingredients, instructions, and description.

  • Reaction: the direct humorous response to the user’s message, in a polite but entertaining way.
  • Name: the recipe name. We tell the PaLM API to generate the recipe name with polite puns that don't offend anyone.
  • Ingredients: a list of ingredients with measurements.
  • Description: the food description generated by the PaLM API.
An example of the prompt used in MoodFood

Third Party Integration

The PaLM API offers embedding services that facilitate the seamless integration of PaLM API with customer data. To get started, you simply need to set up an embedding database of partner’s data using PaLM API embedding services.

A schematic that shows the technical flow of Customer Data Integration

Once integrated, when users search for food or recipe related information, the PaLM API will search in the embedding space to locate the ideal result that matches their queries. Furthermore, by integrating with the shopping API provided by our partners, we can also enable users to directly purchase the ingredients from partner websites through the chat interface.


Partnerships

Swiggy, an Indian online food ordering and delivery platform, expressed their excitement when considering the use cases made possible by experiences like MoodFood.

“We're excited about the potential of Generative AI to transform the way we interact with our customers and merchants on our platform. Moodfood has the potential to help us go deeper into the products and services we offer, in a fun and engaging way.” - Madhusudhan Rao, CTO, Swiggy

Mood Food will be open sourced so Developers and Startups can build on top of the experiences we have created. Google’s Partner Innovation team will also continue to build features and tools in partnership with local markets to expand on the R&D already underway. View the project on GitHub here.


Acknowledgements

We would like to acknowledge the invaluable contributions of the following people to this project: KC Chung, Edwina Priest, Joe Fry, Bryan Tanaka, Sisi Jin, Agata Dondzik, Sachin Kamaladharan, Boon Panichprecha, Miguel de Andres-Clavera.

How to be more productive as a developer: 5 app integrations for Google Chat that can help

Posted by Mario Tapia, Product Marketing Manager, Google Workspace

In today's fast-paced and ever-changing world, it is more important than ever for developers to be able to work quickly and efficiently. With so many different tools and applications available, it can be difficult to know which ones will help you be the most productive. In this blog post, we will discuss five different DevOps application integrations for Google Chat that can help you improve your workflows and be more productive as a developer.

PagerDuty for Google Chat

PagerDuty helps automate, orchestrate, and accelerate responses to unplanned work across an organization. PagerDuty for Google Chat empowers developers, DevOps, IT operations, and business leaders to prevent and resolve business-impacting incidents for an exceptional customer experience—all from Google Chat. With PagerDuty for Google Chat, get notifications, see and share details with link previews, and act by creating or updating incidents.

How to: Use PagerDuty for Google Chat

Asana for Google Chat

Asana helps you manage projects, focus on what’s important, and organize work in one place for seamless collaboration. With Asana for Google Chat, you can easily create tasks, get notifications, update tasks, assign them to the right people, and track your progress.

How to: Use Asana for Google Chat

Jira

Jira makes it easy to manage your issues and bugs. With Jira for Google Chat, you can receive notifications, easily create issues, assign them to the right people, and track your progress while keeping everyone in the loop.

How to: Use Jira for Google Chat

Jenkins

Jenkins allows you to automate your builds and deployments. With Jenkins for Google Chat, development and operations teams can connect into their Jenkins pipeline and stay up to date by receiving software build notifications or trigger a build directly in Google Chat.

How to: Use Jenkins for Google Chat

GitHub

GitHub lets you manage your code and collaborate with your team. Integrations like GitHub for Google Chat make the entire development process fit easily into a developer’s workflow. With GitHub, teams can quickly push new commits, make pull requests, do code reviews, and provide real-time feedback that improves the quality of their code—all from Google Chat.

How to: Use GitHub for Google Chat

Next steps

These are just a few of the many application integrations that can help you be more productive as a developer; check out the Google Workspace Marketplace for more integrations that you or your team might already be using. By using the right tools and applications, you can easily stay connected with your team, manage your tasks and projects, and automate your builds and deployments.

To keep track of all the latest announcements and developer updates for Google Workspace please subscribe to our monthly newsletter or follow us @workspacedevs.

Helping Developers Build with Google, Matters

Posted by Jeannie Zhang and Kevin Po; Product Managers, Nest

As the smart home industry prepares for a major shift in usability and interoperability with Matter launching later this year, we are working to help you build more devices and connections with Google products and beyond.

At Google I/O this year, we shared updates on how Google is continuing to support smart home developers, including the launch of our new and improved Google Home Developer Center. Today, we are excited to share that the Google Home Developer Console is now in Developer Preview at console.home.google.com.

What is the Google Home Developer Console?


The Google Home Developer Console is a guided flow for developers looking to integrate with Google. It provides everything needed to build intelligent and innovative smart home products with Matter. By simplifying the process of building Matter-enabled smart home products, you can spend more time innovating with your devices and less time on the basics.

The console is a part of the Google Home Developer Center we announced earlier this year; the go-to starting place for anyone interested in developing smart home devices and apps with Google.

Google Home Device SDK


Along with this new console, we have also released two new software development kits to make building Matter devices with Google easier. We’ve created the Google Home Device SDK, which extends the open-source Matter SDK with development, testing, and go-to-market tools, making it the fastest and easiest way to build Matter devices.

Created with both new and experienced smart home developers in mind, the Google Home Device SDK includes tools such as code samples, codelabs, and a Matter virtual device to help you easily start building, integrating, and testing your Matter devices with Google.

At I/O this year, we announced Intelligence Clusters, which will allow you to access Google intelligence about the home locally and directly on your Matter devices, using a similar structure to clusters within Matter. To protect the privacy and security of our users, we have built guardrails into our Intelligence Clusters, beginning with Home & Away, to ensure that user information is always encrypted, processed locally, and only with user consent and visibility. You can learn more about these guardrails and fill out our interest form here.

Google Home Mobile SDK


Apps are invaluable to the user experience for your devices, so we have also deployed the Google Home Mobile SDK, a tool to build Android Apps that connect directly with Matter devices. Our mobile SDK streamlines the setup process, creating a more consistent and reliable experience for Android users. These APIs make it easier to set up devices in your app, Google Home, and third party ecosystems, and to share devices with other ecosystems and apps.

Why build with Google?


Even with Matter making interoperability the standard, determining the best platform for your smart devices is still an important consideration. Google's end-to-end tools for Matter devices and apps complement your existing development platforms, accelerate time-to-market for your devices, improve reliability, and let you differentiate with Google Home while having interoperability with other Matter platforms.

Getting Started


Looking to get started building with Matter? Before hopping into the Google Home Developer Console, head over to our Get Started page to gather all the information you need to know before building.

We’re committed to supporting smart home developers that build and innovate with Google, by providing easy and high-quality resources. The latest tools are just an example of our ongoing commitment to be partners in this industry. We can’t wait to see what you build!

Migrating from App Engine Memcache to Cloud Memorystore (Module 13)

Posted by Wesley Chun (@wescpy), Developer Advocate, Google Cloud

Introduction and background

The previous Module 12 episode of the Serverless Migration Station video series demonstrated how to add App Engine Memcache usage to an existing app that has transitioned from the webapp2 framework to Flask. Today's Module 13 episode continues its modernization by demonstrating how to migrate that app from Memcache to Cloud Memorystore. Moving from legacy APIs to standalone Cloud services makes apps more portable and provides an easier transition from Python 2 to 3. It also makes it possible to shift to other Cloud compute platforms should that be desired or advantageous. Developers benefit from upgrading to modern language releases and gain added flexibility in application-hosting options.

While App Engine Memcache provides a basic, low-overhead, serverless caching service, Cloud Memorystore "takes it to the next level" as a standalone product. Rather than a proprietary caching engine, Cloud Memorystore gives users the option to select from a pair of open source engines, Memcached or Redis, each of which provides additional features unavailable from App Engine Memcache. Cloud Memorystore is typically more cost efficient at scale, offers high availability, provides automatic backups, and more. On top of this, one Memorystore instance can be used across many applications, and the service incorporates improvements to memory handling, configuration tuning, and so on, gained from experience managing a huge fleet of Redis and Memcached instances.

While Memcached is more similar to Memcache in usage and features, Redis has a much richer set of data structures that enable powerful application functionality when utilized. Redis has also been recognized as the most loved database in Stack Overflow's annual developer survey, and it's a great skill to pick up. For these reasons, we chose Redis as the caching engine for our sample app. However, if your apps' usage of App Engine Memcache is deeper or more complex, a migration to Cloud Memorystore for Memcached may be a better option as a closer analog to Memcache.

Migrating to Cloud Memorystore for Redis featured video

Performing the migration

The sample application registers individual web page "visits," storing visitor information such as IP address and user agent. In the original app, the most recent visits are cached in Memcache for an hour and used for display if the same user continuously refreshes their browser during this period; caching is one way to counter this abuse. A new visitor or cache expiration results in new visits being saved, as well as the cache being updated with the most recent visits. Such functionality must be preserved when migrating to Cloud Memorystore for Redis.

Below is pseudocode representing the core part of the app that saves new visits and queries for the most recent visits. In the "Before" snippet, you can see how the most recent visits are cached in Memcache. After completing the migration, the underlying caching infrastructure is swapped out in favor of Memorystore (via language-specific Redis client libraries). In this migration, we chose Redis version 5.0, and we recommend the latest versions, 5.0 and 6.x at the time of this writing, as the newest releases feature additional performance benefits, fixes to improve availability, and so on. In the code snippets below, notice how the calls to both caching systems are nearly identical. The bolded lines represent the migration-affected code managing the cached data.

Switching from App Engine Memcache to Cloud Memorystore for Redis
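
In Python terms, the before/after pattern boils down to something like the following sketch, where fetch_visits_from_datastore is a stand-in for the app's own query logic:

# BEFORE: App Engine bundled Memcache (Python 2 runtime).
from google.appengine.api import memcache

def get_visits():
    visits = memcache.get('visits')
    if visits is None:
        visits = fetch_visits_from_datastore()     # app-specific query (assumed)
        memcache.set('visits', visits, time=3600)  # cache for one hour
    return visits

# AFTER: Cloud Memorystore for Redis, via the redis client library.
import pickle
import redis

cache = redis.Redis(host='YOUR_MEMORYSTORE_IP', port=6379)

def get_visits():
    raw = cache.get('visits')
    if raw is not None:
        return pickle.loads(raw)
    visits = fetch_visits_from_datastore()              # app-specific query (assumed)
    cache.set('visits', pickle.dumps(visits), ex=3600)  # cache for one hour
    return visits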

Wrap-up

The migration covered here begins with the Module 12 sample app ("START"). Migrating the caching system to Cloud Memorystore, along with other requisite updates, results in the Module 13 sample app ("FINISH"), plus an optional port to Python 3. To practice this migration on your own to help prepare for your own migrations, follow the codelab to do it by hand while following along in the video.

While the code migration demonstrated seems straightforward, the most critical change is that Cloud Memorystore requires dedicated server instances. For this reason, a Serverless VPC connector is also needed to connect your App Engine app to those Memorystore instances, requiring more dedicated servers. Furthermore, neither Cloud Memorystore nor Cloud VPC are free services, and neither has an "Always free" tier quota. Before moving forward with this migration, check the pricing documentation for Cloud Memorystore for Redis and Serverless VPC access to determine cost considerations before making a commitment.

One key development that may affect your decision: In Fall 2021, the App Engine team extended support of many of the legacy bundled services like Memcache to next-generation runtimes, meaning you are no longer required to migrate to Cloud Memorystore when porting your app to Python 3. You can continue using Memcache even when upgrading to 3.x so long as you retrofit your code to access bundled services from next-generation runtimes.

A move to Cloud Memorystore and today's migration techniques will be here if and when you decide this is the direction you want to take for your App Engine apps. All Serverless Migration Station content (codelabs, videos, source code [when available]) can be accessed at its open source repo. While our content initially focuses on Python users, we plan to cover other language runtimes, so stay tuned. For additional video content, check out our broader Serverless Expeditions series.

Helping Developers Create Meaningful Voice Interactions with Android

Posted by Rebecca Nathenson, Director, Product Management

As we recently announced at I/O, we’re investing in new ways to make Google Assistant your go-to conversational helper for everyday tasks. And we couldn’t do that without a rich community of developers. While Conversational Actions were an excellent way to experiment with voice, the ecosystem has evolved significantly over the last 5 years and we’ve heard some important feedback: users want to engage with their favorite apps using voice, and developers want to build upon their existing investments in Android.

In response to that feedback, we’ve decided to focus our efforts on making App Actions with Android the best way for developers to create deeper, more meaningful voice-forward experiences. As a result, we will turn down Conversational Actions one year from now, in June 2023.

Improving voice-forward experiences

Whether someone asks Assistant to start a workout, order food, or schedule a grocery pickup, we know users are looking for ways to get things done more naturally using voice. To allow developers to integrate those helpful voice experiences into existing Android content more easily – without having to build from scratch – we’re committed to working with them to build App Actions with Android. This will give users more ways to engage with an app’s content – like voice queries and proactive suggestions – and access the app features they already know and love.

We’re continuing to expand the reach of App Actions in the following ways:

  • Integrating voice capabilities across Android devices such as mobile, auto, wearables and other devices in the home;
  • Bringing more traffic without more development work (i.e. Assistant can now direct users to apps even when queries don’t mention an app name);
  • Driving users to the app’s Play Store page if they don’t have the app installed yet; and
  • Surfacing in ‘All Apps’ search for Pixel 6 users.

App Actions not only make your apps easier to discover; you can offer deeper voice experiences by allowing users to simply ask for what they need in their queries. Moreover, we’ll continue investing in all of the popular Assistant experiences users love, like Timers, Media, Home Automation, Communications, and more.

Supporting our developers

We know that these changes aren’t easy, which is why we’re giving developers a year to prepare for the turndown of Conversational Actions. We’re here to help you navigate this transition with these helpful resources:

Building the future together

Looking ahead, we envision a platform that is intuitive, natural, and voice-forward – and one that allows developers to leverage the entire Android ecosystem of devices so they can easily reach more users. We’re always looking to improve the Assistant experience and we’re confident that App Actions is the best way to do that. We’re grateful for all you’ve done to build the Google Assistant ecosystem over the past 5 years and we’re here to help navigate the changes as we continue to make it even better. We’re excited about what lies ahead and we’re grateful to build it together.