Category Archives: Google Cloud Platform Blog

Product updates, customer stories, and tips and tricks on Google Cloud Platform

GCP is building its second Japanese region in Osaka



Since we launched the Tokyo region in 2016, Google Cloud Platform (GCP) has emerged as a leading destination for Asia-Pacific businesses that want to build applications in the cloud. To support this growth, we’re building a second Japanese GCP region in Osaka.

Osaka is a large port city and a leading commercial center, and will be our seventh region in Asia Pacific, joining our future region in Hong Kong, and existing regions in Mumbai, Sydney, Singapore, Taiwan and Tokyo. Overall, the Osaka region brings the total number of existing and announced GCP regions around the world to 19—with more to come!
The Osaka region will open in 2019, making it easier for Japanese companies to build highly available, performant applications. Customers will benefit from lower latency for their cloud-based workloads and data. The region is also designed for high availability, launching with three zones to protect against service disruptions.

We look forward to welcoming you to the GCP Osaka region, and we’re excited to see what you build with our platform. Our locations page provides updates on the availability of additional services and regions. Contact us to request early access to new regions and help us prioritize what we build next.

Announcing Spring Cloud GCP—integrating your favorite Java framework with Google Cloud



For many years, the Spring Framework has been an innovative force in the Java ecosystem. Spring and its vast ecosystem are widely adopted, and are among the most popular Java frameworks. To do more for developers in the Spring community and meet our developers where they are, we’re announcing the Spring Cloud GCP project, a collaboration with Pivotal to better integrate Spring with Google Cloud Platform (GCP), so that running your Spring code on our platform is as easy as possible.

Spring Boot takes an opinionated view of the Spring platform and third-party libraries, making it easy to create stand-alone, production-grade Spring-based applications. With minimal configuration, Spring Boot provides your application with fully configured Java objects, getting you from nothing to a highly functional application in minutes.

By focusing on Spring Boot support, Spring Cloud GCP allows you to greatly cut down on boilerplate code and consume GCP services in a Spring-idiomatic way. In most cases, you won't even need to change your code to take advantage of GCP services.

As part of Spring Cloud GCP, we created integrations between popular Spring libraries and GCP services:

  • Cloud SQL (Spring JDBC): Spring Cloud GCP SQL automatically configures the JDBC URL and driver class name, and helps establish a secure SSL connection using client certificates.
  • Cloud Pub/Sub (Spring Integration): Use Spring Integration concepts like channels and gateways to send and receive messages from Cloud Pub/Sub.
  • Cloud Storage (Spring Resource): Use Spring Resource objects to access and store files in Cloud Storage buckets.
  • Stackdriver Trace (Spring Cloud Sleuth): Use Spring Cloud Sleuth and its annotations to trace your microservices and send the trace data to Stackdriver Trace for storage and analysis.
  • Runtime Configuration API (Spring Cloud Config): Store and access configuration values in the managed Runtime Configuration service without running your own config server.

As of Milestone 2, all of the above integrations are compatible with the latest Spring Framework 5 and Spring Boot 2.

The Spring Cloud GCP libraries are in Beta stage and are available from Pivotal’s Milestones Maven Repository.

To get started, check out the code samples, reference documentation, the Spring Cloud GCP project page and the Spring Cloud code labs! More resources are available on the GCP Spring documentation. We would also love to hear from you at our GitHub issue tracker.

We’re working on other exciting integrations and planning for general availability soon. So stay tuned for more news!

Why we used Elastifile Cloud File System on GCP to power drug discovery



[Editor’s note: Last year, Silicon Therapeutics talked about how they used Google Cloud Platform (GCP) to perform massive drug discovery virtual screening. In this guest post, they discuss the performance and management benefits they realized from using the Elastifile Cloud File System and CloudConnect. If you’re looking for a high-performance file system that integrates with GCP, read on to learn more about the environment they built.]

Here at Silicon Therapeutics, we’ve seen the benefits of GCP as a platform for delivering massive scale-out compute, and have used it as an important component of our drug discovery workload. For example, in our past post we highlighted the use of GCP for screening millions of compounds against a conformational ensemble of a flexible protein target to identify putative drug molecules.

However, like a lot of high-performance computing workflows, we encounter data challenges. It turns out, there are a lot of data management and storage considerations involved with running one of our core applications, molecular dynamics (MD) simulations, which involve the propagation of atoms in a molecular system over time. The time-evolution of atoms is determined by numerically solving Newton's equations of motion, where forces between the atoms are calculated using molecular mechanics force fields. These calculations typically generate thousands of snapshots containing the atomic coordinates, each with tens of thousands of atoms, resulting in relatively large trajectory files. As such, running MD on a large dataset (e.g. the entirety of the ~100,000 structures in the Protein Data Bank (PDB)) could generate a lot of data (over a petabyte).
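To make the integration step concrete, here is a minimal sketch (ours, not Silicon Therapeutics’ production MD code) of the velocity Verlet scheme commonly used to integrate Newton’s equations of motion, applied to a single particle in a harmonic potential; all constants are illustrative:

```python
# Toy velocity Verlet integrator: propagate one particle under F(x) = -kx.
# Constants (mass, k, dt, step count) are illustrative, not from the post.
def velocity_verlet(x, v, force, mass=1.0, dt=0.01, steps=1000):
    """Return a trajectory of (position, velocity) snapshots."""
    traj = []
    f = force(x)
    for _ in range(steps):
        x = x + v * dt + 0.5 * (f / mass) * dt * dt  # position update
        f_new = force(x)
        v = v + 0.5 * ((f + f_new) / mass) * dt      # velocity update
        f = f_new
        traj.append((x, v))                          # one MD "snapshot"
    return traj

# Harmonic force with k = 1; total energy 0.5*(x^2 + v^2) stays ~constant.
traj = velocity_verlet(x=1.0, v=0.0, force=lambda x: -x)
```

Each snapshot here is two numbers; in production MD a snapshot holds three coordinates for each of tens of thousands of atoms, so thousands of snapshots per run, multiplied across the ~100,000 PDB structures, is what pushes the totals toward the petabyte scale described above.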

In scientific computing, decreasing the overall time-to-result and increasing accuracy are crucial in helping to discover treatments for illnesses and diseases. In practice, doing so is extremely difficult due to the ever-increasing volume of data and the need for scalable, high-performance, shared data access and complex workflows. Infrastructure challenges, particularly around file storage, often consume valuable time that could be better spent on core research, thus slowing the progress of critical science.

Our physics-based workflows create parallel processes that generate massive amounts of data, quickly. Supporting these workflows requires flexible, high-performance IT infrastructure. Furthermore, analyzing the simulation results to find patterns and discover new druggable targets means sifting through all that data—in the case of this run, over one petabyte. That kind of infrastructure would be prohibitively expensive to build internally.

The public cloud is a natural fit for our workflows, since in the cloud, we can easily apply thousands of parallel compute nodes to a simulation or analytics job. However, while cloud is synonymous with scalable, high-performance compute, delivering complementary scalable, high-performance storage in the cloud can be problematic. We’re always searching for simpler, more efficient ways to store, manage, and process data at scale, and found that the combination of GCP and the Elastifile cross-cloud data fabric could help us resolve our data challenges, thus accelerating the pace of research.
Our HPC architecture used Google Compute Engine CPUs and GPUs, Elastifile for distributed file storage, and Google Cloud Storage plus Elastifile to manage inactive data.

Why high-performance, scale-out file storage is crucial


To effectively support our bursty molecular simulation and analysis workflows, we needed a cloud storage solution that could satisfy three key requirements:

  • File-native primary storage - Like many scientific computing applications, the analysis software for our molecular simulations was written to generate and ingest data in file format from a file system that ensures strict consistency. These applications won’t be refactored to interface directly with object storage systems like Google Cloud Storage any time soon—hence the need for a cloud-based, POSIX-compliant file system. 
  • Scalable global namespace - Stitching together file servers on discrete cloud instances may suffice for simple analyses on small data sets. However, the do-it-yourself method comes up short as datasets grow and when you need to share data across applications (e.g., in multi-stage workflows). We needed a modern, fully-distributed, shared file system to deliver the scalable, unified namespace that our workflows require. 
  • Cost-effectiveness - Finally, when managing bursty workloads at scale, rigid storage infrastructure can be prohibitively expensive. Instead, we needed a solution that could be rapidly deployed/destroyed, to keep our infrastructure costs aligned to demand. And ideally, for maximum flexibility, we also wanted a solution that could facilitate data portability, both 1) between sites and clouds, and 2) between formats—file format for “active” processing and object format for cost-optimized “inactive” storage/archival/backup.
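The strict-consistency requirement in the first bullet can be illustrated with a toy example (ours, not Elastifile code): a producer stage writes a file and atomically renames it into place, so a consumer stage never observes a partial file. POSIX file systems guarantee this atomic rename; plain object stores generally do not, which is part of why refactoring such applications around object storage is non-trivial:

```python
# Toy producer/consumer handoff relying on POSIX atomic rename.
# File name and payload are illustrative.
import os
import tempfile

def publish(directory, name, payload):
    """Write to a temp file, then atomically rename into place.

    On a POSIX file system the rename is atomic, so a downstream stage
    sees either no file or the complete file, never a partial write.
    """
    fd, tmp = tempfile.mkstemp(dir=directory)
    with os.fdopen(fd, "wb") as f:
        f.write(payload)
    os.replace(tmp, os.path.join(directory, name))

workdir = tempfile.mkdtemp()
publish(workdir, "frame_0001.xyz", b"42 atoms\n")
```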


Solving the file storage problem


To meet our storage needs and support the evolving requirements of our research, we worked with Elastifile, whose cross-cloud data fabric was the backbone of our complex molecular dynamics workflow.

The heart of the solution is the Elastifile Cloud File System (ECFS), a software-only, distributed file system designed for performance and scalability in cloud and hybrid-cloud environments. Built to support the noisy, heterogeneous environments encountered at cloud-scale, ECFS is well-suited to primary storage for data-intensive scientific computing workflows. To facilitate data portability and policy-based controls, Elastifile file systems are exposed to applications via Elastifile “data containers.” Each file system can span any number of cloud instances within a single namespace, while maintaining the strict consistency required to support parallel, transactional applications in complex workflows.

By deploying ECFS on GCP, we were able to simplify and optimize a molecular dynamics workflow. We then applied it to 500 unique proteins as a proof of concept for the aforementioned PDB-wide screen. For this computation, we leveraged a SLURM cluster running on GCP. The compute nodes were 16 n1-highcpu-32 instances, with 8 GPUs attached to each instance for a total of 128 K80 GPUs and 512 vCPUs. The storage capacity was provided by a 6 TB Elastifile data container mounted on all the compute nodes.
Defining SLURM configuration to allocate compute and storage resources
Before Elastifile, provisioning and managing storage for such workflows was a complex, manual process. We partitioned the input datasets by hand and created several different clusters, each with its own disks, because a single large disk often led to NFS issues, particularly under heavy metadata loads. Once each cluster's outputs were complete, we stored the disks as snapshots; for access, we spun up an instance and shared the credentials for data access. This access pattern was error-prone as well as insecure, and at scale such manual processes are time-consuming and introduce the risk of critical errors or data loss.

With Elastifile, however, deploying and managing storage resources was quick and easy. We simply specified the desired storage capacity, and the ECFS cluster was automatically deployed, configured and made available to the SLURM-managed compute resources, all in a matter of minutes. If we want, we can later expand the cluster for additional capacity at the push of a button, which future-proofs the infrastructure against dynamically changing workflow requirements and data scale. By simplifying and automating the deployment process for a cloud-based file system, Elastifile reduced the complexity and risk associated with manual storage provisioning.
Specifying desired file system attributes and policies via Elastifile's unified management console
In addition, by leveraging Elastifile’s CloudConnect service, we were able to seamlessly promote and demote data between ECFS and Cloud Storage, minimizing infrastructure costs. CloudConnect makes it easy to move data from Elastifile’s data container to Cloud Storage buckets, and once the data has moved, we can tear down the Elastifile infrastructure, reducing unnecessary costs.
Leveraging Elastifile's CloudConnect UI to monitor progress of data "check in" and "check out" operations between file and object storage
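The promote/demote flow that CloudConnect automates amounts to policy-driven tiering. The sketch below is a rough illustration using a local directory as a stand-in for a Cloud Storage bucket; the age threshold, file names and function are our own, not CloudConnect's actual API:

```python
# Schematic age-based tiering: demote files untouched past a threshold
# from "active" (file tier) to "archive" (object tier stand-in).
import os
import shutil
import tempfile
import time

def demote_inactive(active_dir, archive_dir, max_age_seconds):
    """Move files idle longer than max_age_seconds to the archive tier."""
    moved = []
    now = time.time()
    for name in sorted(os.listdir(active_dir)):
        path = os.path.join(active_dir, name)
        if os.path.isfile(path) and now - os.path.getmtime(path) > max_age_seconds:
            shutil.move(path, os.path.join(archive_dir, name))
            moved.append(name)
    return moved

active, archive = tempfile.mkdtemp(), tempfile.mkdtemp()
# One "cold" trajectory file (mtime forced into the past) and one fresh file.
open(os.path.join(active, "old_run.dcd"), "w").close()
os.utime(os.path.join(active, "old_run.dcd"),
         (time.time() - 3600, time.time() - 3600))
open(os.path.join(active, "new_run.dcd"), "w").close()

moved = demote_inactive(active, archive, max_age_seconds=600)
```

Once the cold data sits in the object tier, the file-tier infrastructure behind it can be torn down, which is the cost-saving step described above.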
This data movement is essential to our operations, since we need to visualize and analyze subsets of this data on our local desktops. Moving forward, leveraging Elastifile’s combination of data performance, parallelism, scalability, shareability and portability will help us perform more—and larger-scale—molecular analyses in shorter periods of time. This will ultimately help us find better drug candidates, faster.
Visualizing the protein structure, based on the results of the molecular dynamics analyses
As a next step, we’ll work to scale the workflow to all of the unique protein structures in the PDB and perform deep-learning analysis on the resulting data to find patterns associated with protein dynamics, druggability and tight-binding ligands.

To learn more about how Elastifile supports highly-parallel, on-cloud molecular analysis on GCP, check out this demo video and be sure to visit them at www.elastifile.com.

Toward effective cloud governance: designing policies for GCP customers large and small



When it comes to security and governance, not all orgs are created equal. A mom-and-pop shop has different needs than a large enterprise, and startups have different requirements than, say, a local government.

Google Cloud Platform (GCP) customers come in all shapes and sizes, and so do the identity and access management policies that they put in place. Whether you work for a small company and wear many hats, or for a large enterprise with a clearly defined role, you need a policy baseline to implement your GCP environment.

To get you off to a good start, we've written a series of articles that look at typical customer environments and their identity postures. Using a hypothetical customer, each article shows you how to design GCP policies that meet the policy requirements of the reference organization.

In a first phase, we’ve published use cases about the following organizations:

  • Enterprise customers can have complex organizational structures and mature policies often developed over many years. Typically, they have many users to consider and manage.
  • Startups typically have simpler policy requirements and need to be able to move quickly. However, they still need to ensure that appropriate safeguards are in place, particularly around protection of intellectual property.
  • Education and training providers need to be able to automatically create and destroy safe and sandboxed student environments.

In addition to these articles, we also published a tutorial based on the fictional startup customer to guide you through many of the implementation steps. You can find the tutorial here.

Of course, this is just the beginning, and we are well aware that one size doesn't fit all, or even most! So we encourage you to read them all and blend their guidance to fit your specific use case. In the meantime, if you have any suggestions for more use cases, please let us know and we'll add them to our list.

How to use Weaveworks free tier for continuous delivery, monitoring and alerts for Kubernetes Engine



Editor’s Note: Today we hear from our partner Weaveworks, which recently integrated its Weave Cloud container and microservices management tools with Google Kubernetes Engine. Read on to learn how Weave Cloud can make it easier to deploy and monitor your applications to Kubernetes Engine.

At Weaveworks, our goal is to help developers create and operate Kubernetes-based applications. Any developer can sign up for Weave Cloud, hook up their git repository and get continuous delivery, observability, metrics, dashboards and alerts immediately.

We recently launched a free tier of Weave Cloud for Google Cloud Platform (GCP) users. If you’re just getting started with Kubernetes, bringing together Google Cloud Platform’s free tier with Weave Cloud’s free tier creates a powerful development and operations stack that you can use to get started developing Kubernetes-based applications.

Weave Cloud brings Kubernetes’ powerful automation together with our continuous deployment service. Development and devops teams benefit from being able to continuously deploy, visually control and monitor all the services within the cluster: continuous deployment increases development velocity, automation means more time spent on development, and advanced observability helps resolve issues more quickly, increasing reliability.

The first screenshot shows a deployment taking place with a dashboard that checks whether the new code has improved performance or whether it should be rolled back.
Weave Cloud also lets you observe and drill into the cluster and the microservices running on it, which is great for both learning and troubleshooting applications. This screenshot shows Weave Cloud Explore, which presents an interactive map of the Kubernetes cluster and the applications running on it.

Getting started with Weave Cloud and Kubernetes Engine


Connecting Weave Cloud for continuous delivery, monitoring and alerts with Kubernetes Engine requires just a couple of simple steps.

Subscribe to Weave Cloud: Weave Cloud is available from Cloud Launcher, a collection of preconfigured development stacks, solutions and services for GCP. Simply find Weave Cloud and subscribe to it.

The Standard subscription provides you with one node's worth of free time each month.

Set Weave Cloud permissions: After subscribing to Weave Cloud, you need to allow your Google Account to log in, giving it permission to view your subscription data so that you can be billed via Google.

Add the Weave Cloud agent to the Kubernetes Engine cluster: Weave Cloud works through an agent running on your Kubernetes Engine cluster. (If you don’t already have a cluster, see the Kubernetes Engine Quickstart to create one.) Then copy the commands that include your unique service token and run them in your cluster.

Congratulations! As soon as the agents are connected, you’ll see them on Weave Cloud, and you can start to use the Explore feature to observe your Kubernetes Engine cluster. You can also create a continuous deployment pipeline that takes code right from commit to deploying it into the cluster. (See the Weaveworks GCP page for the steps on how to do this.)

We hope you found this introduction easy, fast and useful! We’d love to hear from you and are always available to help. You can reach us either on Slack or Twitter.

Use Forseti to make sure your Google Kubernetes Engine clusters are updated for “Meltdown” and “Spectre”



Last month, Project Zero disclosed details about CPU vulnerabilities that have been referred to as “Meltdown” and “Spectre,” and we let you know that Google Cloud has been updated to protect against all known vulnerabilities.

Customers running virtual machines (VMs) on Google Cloud services should continue to follow security best practices and regularly apply all security updates, just as they would for any other operating system vulnerability. We provided a full list of recommended actions for GCP customers to protect against these vulnerabilities.

One recommended action is to update all Google Kubernetes Engine clusters to ensure the underlying VM image is fully patched. You can do this automatically by enabling auto-upgrade on your Kubernetes node pools. Want to make sure all your clusters are running a version patched against these CPU vulnerabilities? The Google Cloud security team developed a scanner that can help.

The scanner is now available within Forseti Security, an open-source security toolkit for GCP, allowing you to quickly identify any Kubernetes Engine clusters that have not yet been patched.

If you’ve already installed Forseti, you’ll need to upgrade to version 1.1.10 and enable the scanner. If not, install Forseti Security on a new project in your GCP organization. The scanner will check the version of the node pools in all Kubernetes Engine clusters running in all your GCP projects on an hourly basis. Forseti writes any violations it finds to its violations table, and optionally sends an email to your GCP admins, to help you identify any lingering Meltdown exposure.
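At its core, the scanner's check is a per-node-pool version comparison against known patched releases. The sketch below illustrates the idea; the patched-version thresholds are placeholders for illustration, not Forseti's actual rule data:

```python
# Illustrative GKE version check; the PATCHED_MINIMUMS values below are
# placeholders, not the real patched-release list that Forseti ships.
def parse_gke_version(version):
    """'1.8.7-gke.1' -> (1, 8, 7, 1), suitable for tuple comparison."""
    core, _, patch = version.partition("-gke.")
    return tuple(int(part) for part in core.split(".")) + (int(patch or 0),)

# Minimum patched release per minor version (placeholder data).
PATCHED_MINIMUMS = {(1, 7): (1, 7, 12, 1), (1, 8): (1, 8, 7, 0)}

def is_patched(node_version):
    parsed = parse_gke_version(node_version)
    minimum = PATCHED_MINIMUMS.get(parsed[:2])
    return minimum is not None and parsed >= minimum

# Any node-pool version below its minor version's patched minimum is a
# violation, mirroring the rows Forseti writes to its violations table.
violations = [v for v in ["1.7.11-gke.1", "1.8.7-gke.1"] if not is_patched(v)]
```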

The Forseti toolkit can be used in many different ways to help you stay secure. To learn more about the Forseti community, check out this blog post. Contact [email protected] if you have any questions about this tool.


GCP arrives in Canada with launch of Montréal region



Our fifteenth Google Cloud Platform region and first region in Canada is now open for you to build applications and store data, and promises to significantly improve latency for GCP customers and end users in the area.*

The new Montréal region, northamerica-northeast1, joins Oregon, Iowa, South Carolina and Northern Virginia in North America and makes it easier to build highly available, performant applications using resources across those geographies.

Hosting applications in the new region can improve latency by up to 90% for end users in Montréal, compared to hosting them in the closest region. Please visit www.gcping.com to see how fast Montréal is for yourself.

Services


The Montréal region has everything you need to build the next great application.
The region also has three zones, allowing you to distribute apps and storage across multiple zones and protect against service disruptions.

Interested in a GCP service that’s not available in the Canada region? No problem. You can access this service via the Google Network, the largest cloud network as measured by number of points of presence, and combine any of the services you deploy in Montréal with other GCP services around the world such as Data Loss Prevention, Cloud Spanner and BigQuery.

Google Cloud network


One of the advantages of using Google Cloud is our global networking infrastructure. This private network provides a high-bandwidth, highly reliable, low-latency link to each region across the world. With it, you can reach the Montréal region as easily and as securely as, say, our São Paulo, Sydney or Tokyo regions. In addition, the global Google Cloud Load Balancing makes it easy to deploy truly global applications. For more information on Google's private network, visit Google Network Tiers.

And if you’re looking at hybrid deployments and require dedicated connections to Google Cloud, we provide two Dedicated Interconnect options in Montréal through Cologix.

New storage pricing


We're also launching reduced prices for Google Cloud Storage infrequent access and cold storage classes. Effective today, you’ll pay 19% less for Nearline Storage in Montréal, London and Frankfurt. And you’ll pay even less for Coldline—23% less in London and Frankfurt, and 15% less in Sydney and Mumbai. And unlike some other competitive cloud storage offerings, you can access all Cloud Storage tiers via a single API with predictable latency. Try out Cloud Storage here.

What customers are saying


Canadian companies welcome the addition of GCP region in Montréal.

"The Montréal technology industry is full of great minds, ideas and talent, and Diagram Ventures is always looking for great local partners to support its mission to be a launchpad for Canadian success stories. Now that the Google Cloud Platform region is open in Montréal, our ventures can leverage the proximity of the network to get to market faster." 
— Marc-Antoine Ross, Chief Innovation Officer, Diagram
"Having worked with Google and their Cloud Platform across the world since 2014, we are very excited to have their world-class cloud services in our own neighbourhood. We continue to see huge growth in Canada and some requirement for local-only hosting and this will allow us to provide our Canadian customers with lower latency and highly available services." 
— Neil Cawse, Chief Executive Officer, Geotab
“At Ubisoft, we’re constantly looking at new ways to evolve gaming and incorporate the feedback of our players to introduce and build new services. Our Montréal team collaborates closely with Google to continue to test our ideas and bring them to life, so we are excited to see this new addition to the Montréal ecosystem with GCP’s new cloud region.” 
— Thomas Belmont, Producer, Online Technology Group, Ubisoft

GCP partners in Canada


Partners in Canada are available to help design and support your deployment, migration and maintenance needs.
“Accenture has been named Google’s Partner of the Year for the past six years and, now with Google’s new cloud region in Canada, we can offer even stronger delivery capabilities on top of our industry expertise and collaborative work with Google. Accenture has helped many of our clients leverage the scale, security and cost effectiveness of Google Cloud.” 
— Bill Morris, Canada President and Senior Managing Director, Accenture
“We work with Google and customers in Canada and globally, to bring their desired transformations to life with cloud. Montréal’s new Google Cloud region will accelerate their journey, taking companies from idea to business impact faster and leveraging the power of Google’s network and Pythian’s expertise in cloud and data.” 
— Paul Vallée, CEO & Founder, Pythian Group

Premier Partners have completed extensive technical training and have strong expertise working with customers to ensure that no aspect of your next big project is left to chance. Premier Partners include: Accenture, PwC, Pythian Group, Onix Networking Canada, Cloudypedia

Technology Partners provide tools which integrate with our platform to extend its reach and functionality or use one of our services as a foundation for their products. Technology Partners include: RedHat, Pivotal, SAP, Cisco, Salesforce

Service Partners develop custom applications and provide managed services across the entire GCP stack. Service Partners include: Sourced Group, CloudOps, Nuvoola, Slalom, IMP Solutions, Six Factor, Zirro, Scalar, Tenzing, Linkbynet

Visit our partners page for more information.

Google Cloud in Canada


We’ve been investing in Canada for over 10 years, with salespeople across the country from Vancouver in the west to Montréal in the east, and every large city in between. Our staff of customer engineers, customer reliability engineers, support engineers and solution architects is here to help you build what’s next in Canada.

Getting started


For help migrating to GCP, please contact our local partners. For additional details on the Montréal region, please visit our Montréal region page, where you’ll get access to free resources, white papers, the "Cloud On-Air" on-demand video series and more. Our locations page provides updates on the availability of additional services and regions. Contact us to request early access to new regions and help us prioritize where we build next.

*Please visit our Service Specific Terms to get detailed information on our data storage capabilities.

GCP arrives in Canada with launch of Montréal region



Our fifteenth Google Cloud Platform region and first region in Canada is now open for you to build applications and store data, and promises to significantly improve latency for GCP customers and end users in the area.*

The new Montréal region, northamerica-northeast1, joins Oregon, Iowa, South Carolina and Northern Virginia in North America and makes it easier to build highly available, performant applications using resources across those geographies.

Hosting applications in the new region can improve latency by up to 90% for end users in Montréal, compared to hosting them in the closest region. Please visit www.gcping.com to see how fast Montréal is for yourself.

Services


The Montréal region has everything you need to build the next great application:
The region also has three zones, allowing you to distribute apps and storage across multiple zones and protect against service disruptions.

Interested in a GCP service that’s not available in the Canada region? No problem. You can access this service via the Google Network, the largest cloud network as measured by number of points of presence, and combine any of the services you deploy in Montréal with other GCP services around the world such as Data Loss Prevention, Cloud Spanner and BigQuery.

Google Cloud network


One of advantages of using Google Cloud is our global networking infrastructure. This private network provides a high-bandwidth, highly reliable, low-latency link to each region across the world. With it, you can reach the Montréal region as easily and as securely as, say, our São Paulo, Sydney or Tokyo regions. In addition, the global Google Cloud Load Balancing makes it easy to deploy truly global applications. For more information on Google's private network, visit Google Network Tiers.

And if you’re looking at hybrid deployments and require dedicated connections to Google Cloud, we provide two Dedicated Interconnect options in Montréal through Cologix.

New storage pricing


We're also launching reduced prices for Google Cloud Storage infrequent access and cold storage classes. Effective today, you’ll pay 19% less for Nearline Storage in Montréal, London and Frankfurt. And you’ll pay even less for Coldline—23% less in London and Frankfurt, and 15% lower in Sydney and Mumbai. And unlike some other competitive cloud storage offerings, you can access all Cloud Storage tiers via a single API with predictable latency. Try out Cloud Storage here.

What customers are saying


Canadian companies welcome the addition of a GCP region in Montréal.

"The Montréal technology industry is full of great minds, ideas and talent, and Diagram Ventures is always looking for great local partners to support its mission to be a launchpad for Canadian success stories. Now that the Google Cloud Platform region is open in Montréal, our ventures can leverage the proximity of the network to get to market faster." 
— Marc-Antoine Ross, Chief Innovation Officer, Diagram
"Having worked with Google and their Cloud Platform across the world since 2014, we are very excited to have their world-class cloud services in our own neighbourhood. We continue to see huge growth in Canada and some requirement for local-only hosting and this will allow us to provide our Canadian customers with lower latency and highly available services." 
— Neil Cawse, Chief Executive Officer, Geotab
“At Ubisoft, we’re constantly looking at new ways to evolve gaming and incorporate the feedback of our players to introduce and build new services. Our Montréal team collaborates closely with Google to continue to test our ideas and bring them to life, so we are excited to see this new addition to the Montréal ecosystem with GCP’s new cloud region.” 
— Thomas Belmont, Producer, Online Technology Group, Ubisoft

GCP partners in Canada


Partners in Canada are available to help design and support your deployment, migration and maintenance needs.
“Accenture has been named Google’s Partner of the Year for the past six years and, now with Google’s new cloud region in Canada, we can offer even stronger delivery capabilities on top of our industry expertise and collaborative work with Google. Accenture has helped many of our clients leverage the scale, security and cost effectiveness of Google Cloud.” 
— Bill Morris, Canada President and Senior Managing Director, Accenture
“We work with Google and customers in Canada and globally, to bring their desired transformations to life with cloud. Montréal’s new Google Cloud region will accelerate their journey, taking companies from idea to business impact faster and leveraging the power of Google’s network and Pythian’s expertise in cloud and data.” 
— Paul Vallée, CEO & Founder, Pythian Group

Premier Partners have completed extensive technical training and have strong expertise working with customers to ensure that no aspect of your next big project is left to chance. Premier Partners include Accenture, PwC, Pythian Group, Onix Networking Canada and Cloudypedia.

Technology Partners provide tools that integrate with our platform to extend its reach and functionality, or use one of our services as a foundation for their products. Technology Partners include Red Hat, Pivotal, SAP, Cisco and Salesforce.

Service Partners develop custom applications and provide managed services across the entire GCP stack. Service Partners include Sourced Group, CloudOps, Nuvoola, Slalom, IMP Solutions, Six Factor, Zirro, Scalar, Tenzing and Linkbynet.

Visit our partners page for more information.

Google Cloud in Canada


We’ve been investing in Canada for over 10 years, with salespeople across the country from Vancouver in the west to Montréal in the east, and every large city in between. Our staff of customer engineers, customer reliability engineers, support engineers and solution architects is here to help you build what’s next in Canada.

Getting started


For help migrating to GCP, please contact our local partners. For additional details on the Montréal region, please visit our Montréal region page, where you’ll get access to free resources, white papers, the "Cloud On-Air" on-demand video series and more. Our locations page provides updates on the availability of additional services and regions. Contact us to request early access to new regions and help us prioritize where we build next.

*Please visit our Service Specific Terms to get detailed information on our data storage capabilities.

White paper: Modernizing your .NET Application for Google Cloud



Last week, we published a “move-and-improve” white paper about rearchitecting a monolithic .NET application using microservices. Today, we're introducing the next installment in our migration series, “Modernizing your .NET Application for Google Cloud.”

This paper dives into the details of the modernization roadmap. For example, in the previous white paper, we discussed using domain-driven design (DDD) to deconstruct a monolith into microservices. Now, we ramp up the fun with hands-on activities and real-world code samples. You'll be working through the modernization of critical functions, including authentication, database and caching:

  • Authentication - PetShop, our guinea pig application, currently uses the legacy ASP.NET Membership framework for authentication, which needs to be updated to support modern authentication protocols, such as OAuth and OIDC, to better support mobile devices and applications. 
  • Database - PetShop uses Oracle for backend storage, but there's no technical reason we can’t use an open-source database like PostgreSQL instead. It may even be less expensive and carry fewer licensing constraints for cloud computing. 
  • Caching - PetShop uses in-memory, on-server caching, but cloud-based systems can achieve better resiliency and performance by leveraging a distributed caching engine such as Redis.
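For the caching step, one possible direction (our assumption here, not a step prescribed by the white paper) is a managed Redis instance via Cloud Memorystore, which the application's distributed cache client can then be pointed at. Instance, region and size below are illustrative:

```shell
# Sketch: provision a managed Redis instance for distributed caching.
# The instance name, region and size are illustrative; adjust to your project.
gcloud redis instances create petshop-cache \
    --region northamerica-northeast1 \
    --size 1 \
    --tier basic

# Retrieve the host and port to configure the application's Redis client.
gcloud redis instances describe petshop-cache \
    --region northamerica-northeast1 \
    --format 'value(host,port)'
```

Running Redis as a managed service means failover, patching and monitoring are handled for you, which is exactly the offloading the paper's caching discussion is after.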

By the time you finish the code samples in this white paper, you’ll have breathed new life into PetShop. By adopting technologies like Firebase, PostgreSQL and Redis, you’ll have offloaded a great deal of the maintenance and management to your service providers—in this case, GCP. For example, you’ll no longer have to maintain the ASP.NET Membership framework for authentication, as you’ll have outsourced it to Firebase. Get started today: download the white paper, and clone the GitHub repository.