Google Workspace Updates Weekly Recap – April 4, 2025

New updates

Unless otherwise indicated, the features below are available to all Google Workspace customers, and are fully launched or in the process of rolling out. Rollouts should take no more than 15 business days to complete if launching to both Rapid and Scheduled Release at the same time. If not, each stage of rollout should take no more than 15 business days to complete.

Improving the email signature experience for the Gmail app on Android devices 
Starting this week, if no mobile signature is set in the Gmail app on your Android device, your web signature will be inserted when you draft an email. The signature supports images, logos, and text formatting, and appears just as it does when sending from Gmail on the web. If you prefer not to use the web signature on your Android device, you can set a non-empty mobile signature (such as your name). | Rolling out to Rapid Release domains now on Android devices; launch to Scheduled Release domains planned for April 21, 2025 on Android devices. | Rollout to Rapid Release and Scheduled Release domains is complete on iOS devices. | Available to all Google Workspace customers, Workspace Individual Subscribers, and users with personal Google accounts. | Visit the Help Center to learn more about creating a Gmail signature.
Set default colors and fonts for new scenes and objects in Google Vids 
We’re excited to announce that users can now set default colors and fonts for newly inserted scenes and objects, like shapes and text boxes, in Google Vids. To do so, in the toolbar, click Customize Styles, then click New objects and scenes at the bottom right of the “Styles” panel. From there you can set colors and fonts for individual object types. | Rolling out now to Rapid Release and Scheduled Release domains. | Available to Business Standard and Plus; Enterprise Standard and Plus; Essentials, Enterprise Essentials, and Enterprise Essentials Plus; and Education Plus. | Visit the Help Center to learn more about Google Vids.


Group by view aggregation now available for tables in Google Sheets 
Last year, we announced group by, a new type of view for tables in Google Sheets that lets you aggregate your data into groups based on a selected column. This week, we’re excited to introduce the ability to apply column-level aggregations when in a group by view. | Rolling out to Rapid Release domains now; launch to Scheduled Release domains planned for April 21, 2025. | Available to Google Workspace customers, Google Workspace Individual subscribers, and users with personal Google accounts. | Visit the Help Center to learn more about using tables in Google Sheets.

Google Workspace apps are now available for the Gemini mobile app 
Workspace apps (formerly known as Workspace Extensions) for the Gemini app are now available on Android and iOS devices in open beta. When enabled, Gemini can connect across your apps, like Gmail, Docs, Calendar, and Drive, to provide more context to your prompts without the need to switch between multiple apps. We expect general availability to begin rolling out by the end of April 2025. | Rollout to Rapid Release and Scheduled Release domains is complete. | Available to all Google Workspace users with access to the Gemini app as a core service. | Admins can visit the Help Center to learn more about turning Google Workspace apps on or off in Gemini. | End users can visit the Help Center to learn more about using Google Workspace apps in Gemini.


Previous announcements

The announcements below were published on the Workspace Updates blog earlier this week. Please refer to the original blog posts for complete details.


New sidebar with design elements in Google Slides makes building presentations easier 
To help you find the features you need more quickly when creating, building, and presenting in Google Slides, you can now access templates, image generation, new design components and more in a new sidebar on the right side of your canvas. | Learn more about the new sidebar in Slides.

Introducing Audio Overviews, now available in the Gemini app 
Leveraging the same technology that powers NotebookLM’s Audio Overviews, Gemini app users can now generate podcast-style conversations based on documents, slides, and Deep Research reports. | Learn more about Audio Overviews in the Gemini app.

Help me create in Google Docs now available in seven additional languages 
Help me create in Docs is now available in seven additional languages: Spanish, Portuguese, Japanese, Korean, Italian, French, and German. | Learn more about language availability for Help me create.

Support for continuous framing on select Logitech devices for Google Meet on ChromeOS 
If you’re joining a meeting from a Logitech Rally Bar, Rally Bar Mini, or Rally Bar Huddle device, you can now take advantage of the device’s built-in continuous framing. | Learn more about framing on select Logitech devices for Google Meet on ChromeOS.

Pre-configure the Google Workspace apps admin setting for the Gemini app ahead of general availability 
Workspace apps (formerly known as Workspace Extensions) for the Gemini app will soon transition from open beta to general availability. As part of this transition, we will turn the Workspace apps admin setting ON by default. We expect general availability to begin rolling out by the end of April 2025. | Learn more about the admin setting.

Introducing updates to sources for NotebookLM and NotebookLM Plus 
We’re introducing additional updates that improve upon the NotebookLM user experience as it relates to sources. | Learn more about new NotebookLM updates. 

NotebookLM and the Gemini app are now Core Services with enterprise-grade data protection for all education customers 
The Gemini app and NotebookLM are now Core Services for all education customers, and NotebookLM Plus, the premium version of NotebookLM, is now a Core Service for customers with the Gemini Education and Gemini Education Premium add-ons. | Learn more about NotebookLM and the Gemini app.

More options for exporting your Google Workspace data are available in open beta
Beginning today, Admins can choose from several options when exporting their organization’s data (also known as ‘takeout’). | Learn more about exporting your Google Workspace data.


Completed rollouts

The features below completed their rollouts to Rapid Release domains, Scheduled Release domains, or both. Please refer to the original blog posts for additional details.


Rapid Release Domains: 
Scheduled Release Domains: 
Rapid and Scheduled Release Domains: 

    For a recap of announcements in the past six months, check out What’s new in Google Workspace (recent releases).

    Google announces Sec-Gemini v1, a new experimental cybersecurity model




    Today, we’re announcing Sec-Gemini v1, a new experimental AI model focused on advancing cybersecurity AI frontiers. 



    As outlined a year ago, defenders face the daunting task of securing against all cyber threats, while attackers need to successfully find and exploit only a single vulnerability. This fundamental asymmetry has made securing systems extremely difficult, time-consuming, and error-prone. AI-powered cybersecurity workflows have the potential to help shift the balance back to the defenders by force-multiplying cybersecurity professionals like never before.


     

    Effectively powering SecOps workflows requires state-of-the-art reasoning capabilities and extensive current cybersecurity knowledge. Sec-Gemini v1 achieves this by combining Gemini’s advanced capabilities with near real-time cybersecurity knowledge and tooling. This combination allows it to achieve superior performance on key cybersecurity workflows, including incident root cause analysis, threat analysis, and vulnerability impact understanding.



    We firmly believe that successfully pushing AI cybersecurity frontiers to decisively tilt the balance in favor of the defenders requires a strong collaboration across the cybersecurity community. This is why we are making Sec-Gemini v1 freely available to select organizations, institutions, professionals, and NGOs for research purposes.



    Sec-Gemini v1 outperforms other models on key cybersecurity benchmarks as a result of its advanced integration of Google Threat Intelligence (GTI), OSV, and other key data sources. Sec-Gemini v1 outperforms other models on CTI-MCQ, a leading threat intelligence benchmark, by at least 11% (See Figure 1). It also outperforms other models by at least 10.5% on the CTI-Root Cause Mapping benchmark (See Figure 2):

    Figure 1: Sec-Gemini v1 outperforms other models on the CTI-MCQ Cybersecurity Threat Intelligence benchmark.

    Figure 2: Sec-Gemini v1 has outperformed other models in a Cybersecurity Threat Intelligence-Root Cause Mapping (CTI-RCM) benchmark that evaluates an LLM's ability to understand the nuances of vulnerability descriptions, identify the underlying root causes of vulnerabilities, and accurately classify them according to the CWE taxonomy.

    Below is an example of the comprehensiveness of Sec-Gemini v1’s answers in response to key cybersecurity questions. First, Sec-Gemini v1 is able to determine that Salt Typhoon is a threat actor (not all models do) and provides a comprehensive description of that threat actor, thanks to its deep integration with Mandiant Threat Intelligence data.

    Next, in response to a question about the vulnerabilities in the Salt Typhoon description, Sec-Gemini v1 outputs not only vulnerability details (thanks to its integration with OSV data, the open-source vulnerabilities database operated by Google), but also contextualizes the vulnerabilities with respect to threat actors (using Mandiant data). With Sec-Gemini v1, analysts can understand the risk and threat profile associated with specific vulnerabilities faster.

    If you are interested in collaborating with us on advancing the AI cybersecurity frontier, please request early access to Sec-Gemini v1 via this form.

    Chrome Dev for Desktop Update

    The Dev channel has been updated to 137.0.7106.2 for Windows, Mac and Linux.

    A partial list of changes is available in the Git log. Interested in switching release channels? Find out how. If you find a new issue, please let us know by filing a bug. The community help forum is also a great place to reach out for help or learn about common issues.

    Chrome Release Team
    Google Chrome

    Taming the Wild West of ML: Practical Model Signing with Sigstore



    In partnership with NVIDIA and HiddenLayer, as part of the Open Source Security Foundation, we are now launching the first stable version of our model signing library. Using digital signatures like those from Sigstore, we allow users to verify that the model used by the application is exactly the model that was created by the developers. In this blog post we will illustrate why this release is important from Google’s point of view.



    With the advent of LLMs, the ML field has entered an era of rapid evolution. We have seen remarkable progress leading to weekly launches of various applications that incorporate ML models to perform tasks ranging from customer support and software development to security-critical work.



    However, this has also opened the door to a new wave of security threats. Model and data poisoning, prompt injection, prompt leaking and prompt evasion are just a few of the risks that have recently been in the news. Garnering less attention are the risks around the ML supply chain process: since models are an uninspectable collection of weights (sometimes also with arbitrary code), an attacker can tamper with them and achieve significant impact on those using the models. Users, developers, and practitioners need to examine an important question during their risk assessment process: “can I trust this model?”



    Since its launch, Google’s Secure AI Framework (SAIF) has created guidance and technical solutions for creating AI applications that users can trust. A first step in achieving trust in the model is to permit users to verify its integrity and provenance, to prevent tampering across all processes from training to usage, via cryptographic signing. 



    The ML supply chain

    To understand the need for the model signing project, let’s look at the way ML-powered applications are developed, with an eye to where malicious tampering can occur.



    Applications that use advanced AI models are typically developed in at least three different stages. First, a large foundation model is trained on large datasets. Next, a separate ML team fine-tunes the model to make it achieve good performance on application-specific tasks. Finally, this fine-tuned model is embedded into an application.



    The three steps involved in building an application that uses large language models.



    These three stages are usually handled by different teams, and potentially even different companies, since each stage requires specialized expertise. To make models available from one stage to the next, practitioners leverage model hubs, which are repositories for storing models. Kaggle and HuggingFace are popular open source options, although internal model hubs could also be used.



    This separation into stages creates multiple opportunities where a malicious user (or external threat actor who has compromised the internal infrastructure) could tamper with the model. This could range from just a slight alteration of the model weights that control model behavior, to injecting architectural backdoors — completely new model behaviors and capabilities that could be triggered only on specific inputs. It is also possible to exploit the serialization format and inject arbitrary code execution in the model as saved on disk — our whitepaper on AI supply chain integrity goes into more details on how popular model serialization libraries could be exploited. The following diagram summarizes the risks across the ML supply chain for developing a single model, as discussed in the whitepaper.



    The supply chain diagram for building a single model, illustrating some supply chain risks (oval labels) and where model signing can defend against them (check marks)
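
    To make the serialization risk described above concrete, below is a minimal, generic Python illustration (not taken from the whitepaper or any real model format) of why loading an untrusted pickle-based model file can execute attacker-controlled code:

        # Illustrative only: loading untrusted pickle data runs attacker-chosen code.
        import os
        import pickle

        class MaliciousModel:
            # __reduce__ tells pickle how to rebuild the object; an attacker can make
            # it return an arbitrary callable, which executes at load time.
            def __reduce__(self):
                return (os.system, ("echo 'arbitrary code ran while loading the model'",))

        payload = pickle.dumps(MaliciousModel())

        # The victim only has to *load* the file for the payload to run.
        pickle.loads(payload)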



    The diagram shows several places where the model could be compromised. Most of these could be prevented by signing the model during training and verifying integrity before any usage, in every step: the signature would have to be verified when the model gets uploaded to a model hub, when the model gets selected to be deployed into an application (embedded or via remote APIs) and when the model is used as an intermediary during another training run. Assuming the training infrastructure is trustworthy and not compromised, this approach guarantees that each model user can trust the model.
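
    As a rough sketch of this verify-before-every-use idea (illustrative only, and not the model signing library’s API), one could hash every file of a model directory into a manifest, sign the manifest when training finishes, and verify it before upload, deployment, or reuse:

        # Minimal sketch, assuming a local Ed25519 key pair; the real library uses
        # Sigstore and handles large models more efficiently than this example.
        import hashlib
        import json
        from pathlib import Path

        from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

        def manifest_for(model_dir: str) -> bytes:
            # Hash every file so that tampering with weights or code changes the manifest.
            root = Path(model_dir)
            digests = {
                str(path.relative_to(root)): hashlib.sha256(path.read_bytes()).hexdigest()
                for path in sorted(root.rglob("*")) if path.is_file()
            }
            return json.dumps(digests, sort_keys=True).encode()

        # After training: sign the manifest.
        private_key = Ed25519PrivateKey.generate()
        signature = private_key.sign(manifest_for("my-model"))

        # Before upload, deployment, or reuse in another training run: verify it.
        # verify() raises InvalidSignature if any file was modified.
        private_key.public_key().verify(signature, manifest_for("my-model"))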



    Sigstore for ML models

    Signing models is inspired by code signing, a critical step in traditional software development. A signed binary artifact helps users identify its producer and prevents tampering after publication. The average developer, however, would not want to manage keys and rotate them on compromise.



    These challenges are addressed by using Sigstore, a collection of tools and services that make code signing secure and easy. By binding an OpenID Connect token to a workload or developer identity, Sigstore alleviates the need to manage or rotate long-lived secrets. Furthermore, signing is made transparent so signatures over malicious artifacts could be audited in a public transparency log, by anyone. This ensures that split-view attacks are not possible, so any user would get the exact same model. These features are why we recommend Sigstore’s signing mechanism as the default approach for signing ML models.



    Today the OSS community is releasing the v1.0 stable version of our model signing library as a Python package supporting Sigstore and traditional signing methods. This model signing library is specialized to handle the sheer scale of ML models (which are usually much larger than traditional software components), and handles signing models represented as a directory tree. The package provides CLI utilities so that users can sign and verify model signatures for individual models. The package can also be used as a library which we plan to incorporate directly into model hub upload flows as well as into ML frameworks.
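
    As a purely hypothetical sketch of how an application could gate model loading on a successful check (the helper below is a placeholder, not the package’s actual API), verification would sit in front of any deserialization:

        # Hypothetical sketch: refuse to load a model unless its signature verifies.
        # `verify_model_signature` stands in for whichever CLI or library entry point
        # the model signing package provides; it is NOT the package's real API.
        from pathlib import Path

        class ModelVerificationError(Exception):
            """Raised when a model's signature does not verify."""

        def verify_model_signature(model_dir: Path, signature_file: Path) -> bool:
            raise NotImplementedError("invoke the model signing tool here")

        def load_verified_model(model_dir: Path, signature_file: Path):
            # Verify the whole directory tree before any weights or code are
            # deserialized, mirroring the "verify at every step" guidance above.
            if not verify_model_signature(model_dir, signature_file):
                raise ModelVerificationError(f"signature check failed for {model_dir}")
            # Only now is it safe to hand the files to the ML framework.
            ...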



    Future goals

    We can view model signing as establishing the foundation of trust in the ML ecosystem. We envision extending this approach to also include datasets and other ML-related artifacts. Then, we plan to build on top of signatures towards fully tamper-proof metadata records that can be read by both humans and machines. This has the potential to automate a significant fraction of the work needed to perform incident response in case of a compromise in the ML world. In an ideal world, an ML developer would not need to make any changes to the training code, while the framework itself would handle model signing and verification in a transparent manner.



    If you are interested in the future of this project, join the OpenSSF meetings attached to the project. To shape the future of building tamper-proof ML, join the Coalition for Secure AI, where we are planning to work on building the entire trust ecosystem together with the open source community. In collaboration with multiple industry partners, we are starting up a special interest group under CoSAI for defining the future of ML signing and including tamper-proof ML metadata, such as model cards and evaluation results.

    More options for exporting your Google Workspace data are available in open beta

    What’s changing

    Beginning today, Admins can choose from several options when exporting their organization’s data (also known as ‘takeout’). Specifically, they will be able to:

    • Export data from all or multiple services, such as Gmail, Chat, or Drive
    • Export Drive data based on existing Drive labels
    • Export data from within a specific date range
    • Export data from selected shared drives


    Exporting data for specific services

    Exporting data based on specific Shared Drives

    Exporting data based on specific or custom date range

    Exporting data based on specific Drive labels

    This update is available in open beta, which means no additional sign-up is required.


    Who’s impacted

    Admins

    Why it’s important

    In 2022, we introduced the ability for admins to export user-generated content by organizational unit (OU) or group; prior to that update, data export was limited to a customer’s full set of user-generated content. With this update, we continue to give our customers more flexibility and specificity over the data they export, which is important as business and compliance needs evolve.

    Getting started


    Rollout pace


    Availability

    Resources

