Category Archives: Google Cloud Platform Blog

Product updates, customer stories, and tips and tricks on Google Cloud Platform

Guest post: Using Terraform to manage Google Cloud Platform infrastructure as code



Managing infrastructure usually involves a web interface or issuing commands in a terminal. These approaches work well for individuals and small teams, but they become troublesome for larger teams with complex requirements. As more organizations migrate to the cloud, CIOs want hybrid and multi-cloud solutions. Infrastructure as code is one way to manage this complexity.

The open-source tool Terraform, in particular, can help you create, change and upgrade infrastructure at scale more safely and predictably. Created by HashiCorp, Terraform codifies APIs into declarative configuration files that can be shared among team members, edited, reviewed and versioned just like application code.

Here's a sample Terraform configuration for creating an instance on Google Cloud Platform (GCP):

resource "google_compute_instance" "blog" {
  name         = "default"
  machine_type = "n1-standard-1"
  zone         = "us-central1-a"

  disk {
    image = "debian-cloud/debian-8"
  }

  disk {
    type    = "local-ssd"
    scratch = true
  }

  network_interface {
    network = "default"
  }
}

Because this is a text file, it can be treated the same as application code and manipulated with the same techniques that developers have had for years, including linting, testing, continuous integration, continuous deployment, collaboration, code review, change requests, change tracking, automation and more. This is a big improvement over managing infrastructure with wikis and shell scripts!

Terraform separates the infrastructure planning phase from the execution phase. The terraform plan command performs a dry-run that shows you what will happen. The terraform apply command makes the changes to real infrastructure.

$ terraform plan
+ google_compute_instance.default
    can_ip_forward:                    "false"
    create_timeout:                    "4"
    disk.#:                            "2"
    disk.0.auto_delete:                "true"
    disk.0.disk_encryption_key_sha256: ""
    disk.0.image:                      "debian-cloud/debian-8"
    disk.1.auto_delete:                "true"
    disk.1.disk_encryption_key_sha256: ""
    disk.1.scratch:                    "true"
    disk.1.type:                       "local-ssd"
    machine_type:                      "n1-standard-1"
    metadata_fingerprint:              ""
    name:                              "default"
    self_link:                         ""
    tags_fingerprint:                  ""
    zone:                              "us-central1-a"


$ terraform apply
google_compute_instance.default: Creating...
  can_ip_forward:                    "" => "false"
  create_timeout:                    "" => "4"
  disk.#:                            "" => "2"
  disk.0.auto_delete:                "" => "true"
  disk.0.disk_encryption_key_sha256: "" => ""
  disk.0.image:                      "" => "debian-cloud/debian-8"
  disk.1.auto_delete:                "" => "true"
  disk.1.disk_encryption_key_sha256: "" => ""
  disk.1.scratch:                    "" => "true"
  disk.1.type:                       "" => "local-ssd"
  machine_type:                      "" => "n1-standard-1"
  metadata_fingerprint:              "" => ""
  name:                              "" => "default"
  network_interface.#:               "" => "1"
  network_interface.0.address:       "" => ""
  network_interface.0.name:          "" => ""
  network_interface.0.network:       "" => "default"
  self_link:                         "" => ""
  tags_fingerprint:                  "" => ""
  zone:                              "" => "us-central1-a"
google_compute_instance.default: Still creating... (10s elapsed)
google_compute_instance.default: Still creating... (20s elapsed)
google_compute_instance.default: Creation complete (ID: default)

Apply complete! Resources: 1 added, 0 changed, 0 destroyed.

This instance is now running on Google Cloud.

Terraform can manage more than just compute instances. At Google Cloud Next, we announced support for GCP APIs to manage projects and folders as well as billing. With these new APIs, Terraform can manage entire projects and many of the resources within them.

By adding just a few lines of code to the sample configuration above, we create a project tied to our organization and billing account, enable a configurable number of APIs and services on that project and launch the instance inside this newly-created project.

resource "google_project" "blog" {
  name            = "blog-demo"
  project_id      = "blog-demo-491834"
  billing_account = "${var.billing_id}"
  org_id          = "${var.org_id}"
}

resource "google_project_services" "blog" {
  project = "${google_project.blog.project_id}"

  services = [
    "iam.googleapis.com",
    "cloudresourcemanager.googleapis.com",
    "cloudapis.googleapis.com",
    "compute-component.googleapis.com",
  ]
}

resource "google_compute_instance" "blog" {
  # ... 

  project = "${google_project.blog.project_id}" # <-- ...="" code="" new="" option="">

Terraform also detects changes to the configuration and applies only the difference.

$ terraform apply
google_compute_instance.default: Refreshing state... (ID: default)
google_project.my_project: Creating...
  name:        "" => "blog-demo"
  number:      "" => ""
  org_id:      "" => "1012963984278"
  policy_data: "" => ""
  policy_etag: "" => ""
  project_id:  "" => "blog-demo-491834"
  skip_delete: "" => ""
google_project.my_project: Still creating... (10s elapsed)
google_project.my_project: Creation complete (ID: blog-demo-491835)

Apply complete! Resources: 1 added, 0 changed, 0 destroyed.

We can verify that the project was created with the proper APIs enabled.

And the instance exists inside this project.
This project-and-instance combination can be stamped out multiple times. Terraform can also create service accounts for these projects and export their IAM credentials.
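
For example, a minimal sketch of adding a service account to the new project might look like the following (the account_id and display_name are illustrative, and this assumes the provider's google_service_account and google_service_account_key resources):

resource "google_service_account" "app" {
  project      = "${google_project.blog.project_id}"
  account_id   = "blog-app"
  display_name = "Blog demo application"
}

resource "google_service_account_key" "app" {
  service_account_id = "${google_service_account.app.name}"
}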

By combining Terraform with GCP’s new resource management and billing APIs, you have more control over your organization's resources. With the isolation guaranteed by projects and the reproducibility provided by Terraform, it's possible to quickly stamp out entire environments. Terraform parallelizes as many operations as possible, so it's often possible to spin up a new environment in just a few minutes.

Use Cases

There are many challenges that can benefit from an infrastructure-as-code approach to managing resources. Here are a few that come to mind:

Ephemeral environments
Once you've codified your infrastructure in Terraform, it's easy to stamp out additional environments for development, QA, staging or testing. Many organizations pay thousands of dollars every month for a dedicated staging environment. Because Terraform parallelizes operations, you can create a copy of production infrastructure in just one trip to the water cooler. Terraform lets developers deploy their changes into identical copies of production, so they can catch bugs early.
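
One possible workflow, sketched below, is to parameterize the configuration and pass a different value for each environment (the env variable and its values are hypothetical; they assume your configuration uses them to name projects and resources):

$ terraform plan  -var 'env=staging'
$ terraform apply -var 'env=staging'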

Rapid project stamping
The new Terraform google_project resource enables quick project stamping. Organizations can easily create identical projects for training, field demos, new hires, coding interviews or disaster recovery. In larger organizations with rollup billing, IT teams can use Terraform to stamp out pre-configured environments tied to a single billing organization.

On-demand continuous integration
You can use Terraform to create continuous integration or build environments on demand that are always in a clean state. These environments only run when needed, reducing costs and improving parity by using the same configurations each time.

Whatever your use case, the combination of Terraform and GCP’s new resource management APIs represents a powerful new way to manage cloud-based environments. For more information, please visit the Terraform website or review the code on GitHub.

Cloud Identity-Aware Proxy: Protect application access on the cloud



Whether your application is lift-and-shift or cloud-native, administrators and developers want a simple way to protect application access so that only the corporate users who should have access can reach it.

At Google Cloud Next '17 last month, we launched Cloud Identity-Aware Proxy (Cloud IAP), which controls access to cloud applications running on Google Cloud Platform by verifying a user’s identity and determining whether that user is allowed to access the application.

Cloud IAP acts as the internet front end for your application, and you gain group-based access control to your application plus the TLS termination and DoS protections of Google Cloud Load Balancer, which underlies Cloud IAP. Users and developers access the application as a public internet URL, with no VPN clients to start up or manage.

With Cloud IAP, your developers can focus on writing custom code for their applications and deploy it to the internet with more protection from unauthorized access simply by selecting the application and adding users and groups to an access list. Google takes care of the rest.

How Cloud IAP works

As an administrator, you enable Cloud IAP protections by synchronizing your end users’ identities to Google’s Cloud Identity solution. You then define simple access policies for HTTPS web applications by selecting the users and groups who should be able to access them. Your developers, meanwhile, write and deploy HTTPS web applications to the internet behind Cloud Load Balancer, which passes incoming requests to Cloud IAP to perform identity checks and apply access policies. If the user is not yet signed in, they're prompted to do so before the policy is applied.
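
As a rough sketch, once Cloud IAP is enabled for an application, granting a group access can be as simple as an IAM binding like the one below (the project and group names are placeholders, and this assumes the roles/iap.httpsResourceAccessor role):

$ gcloud projects add-iam-policy-binding my-project \
    --member='group:app-users@example.com' \
    --role='roles/iap.httpsResourceAccessor'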

Cloud IAP is ideal if you need a fast and reliable way to access your applications more securely. No more hiding behind walled gardens of VPNs. Take advantage of Cloud IAP and let developers do what they're good at, while giving security teams the peace of mind of increased protection of valuable enterprise data.

Cloud IAP is one of the suite of tools that enables you to implement the context-aware secure access described by Google’s BeyondCorp. You should also consider complementing Cloud IAP access control with phishing protection provided by our Security Key Management feature.

Cloud IAP pricing

Cloud IAP user- and group-based access control is available today at no cost. In the future, look for us to add features above and beyond controlling access based on users and groups. And stay tuned for further posts on getting started with Cloud IAP.

Automating project creation with Google Cloud Deployment Manager



Do you need to create a lot of Google Cloud Platform (GCP) projects for your company? Maybe the sheer volume or the need to standardize project creation is making you look for a way to automate project creation. We now have a tool to simplify this process for you.

Google Cloud Deployment Manager is the native GCP tool you can use to create and manage GCP resources, including Compute Engine (i.e., virtual machines), Container Engine, Cloud SQL, BigQuery and Cloud Storage. Now, you can use Deployment Manager to create and manage projects as well.

Whether you have ten or ten thousand projects, automating the creation and configuration of your projects with Deployment Manager allows you to manage projects consistently. We have a set of templates that handle:
  • Project Creation - create the new project with the name you provide
  • Billing - set the billing account for the new project
  • Permissions - set the IAM policy on the project
  • Service Accounts - optionally create service accounts for the applications or services to run in this project
  • APIs - turn on compatible Google APIs that the services or applications in a project may need

Getting started

Managing project creation with Deployment Manager is simple. Here are a few steps to get you started:

  1. Download the templates from our GitHub samples. The project creation samples are available in the Deployment Manager GitHub repo under the project_creation directory, or clone the whole repo:

    git clone https://github.com/GoogleCloudPlatform/deploymentmanager-samples.git

    Then copy the templates under the examples/v2/project_creation directory.

  2. Follow the steps in the README in the project_creation directory. The README includes detailed instructions, but there is one point to emphasize: you should create a new project using the Cloud Console that will be used as your “Project Creation” project. The service account under which Deployment Manager runs needs powerful IAM permissions to create projects and manage billing accounts, hence the recommendation to create this special project and use it only for creating other projects.

  3. Customize your deployments.
    • At a minimum, you'll need to change the config.yaml file to add the name of the project you want to create, your billing account, the IAM permissions you choose to use and the APIs to enable (see the illustrative sketch after this list).
    • Advanced customization: you can do as little or as much as you want here. Let’s assume that your company typically has three types of projects: production service projects, test service projects and developer sandbox projects. These projects require vastly different IAM permissions, different types of service accounts and may also need different APIs. You could add a new top-level template with a parameter for “project-type”. That parameter takes a string as input (such as “prodservice”, “testservice” or “developer”) and uses that value to customize the project for your needs. Alternatively, you can make three copies of the .yaml file, one for each project type, with the correct settings for each.

  4. Create your project.
    From the directory where you stored your templates, use the command line interface to run Deployment Manager:
    gcloud deployment-manager deployments create <newproject_deployment> --config config.yaml --project <Project Creation project>

    Where <newproject_deployment> is the name you want to give the deployment. This is not the new project name; that comes from the value in the config.yaml file. But you may want to use the same name for the deployment, or something similar, so you know how they match up once you’ve stamped out a few hundred projects.
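
As a rough illustration, a customized config.yaml could look something like the sketch below. The property names and template imports here are placeholders, not the authoritative schema; the README in the project_creation directory documents the exact properties your version of the templates expects.

imports:
- path: project.py              # template from the project_creation samples

resources:
- name: my-new-project          # deployment resource name
  type: project.py
  properties:
    # Placeholder values; check the samples README for the real property names.
    project-name: my-new-project
    organization-id: "123456789012"
    billing-account-name: billingAccounts/AAAAAA-BBBBBB-CCCCCC
    apis:
    - compute.googleapis.com
    - storage-component.googleapis.com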

Now you know how to use Deployment Manager to automatically create and manage projects, not just GCP resources. Watch this space to learn more about how to use Deployment Manager, and let us know what you think of the feature. You can also send mail to [email protected].

The state of Ruby on Google Cloud Platform



At Google Cloud Next '17 last month we announced that App Engine flexible environment is now generally available. This brings the convenience of App Engine to Rubyists running Rails, Sinatra or other Rack based web frameworks.

One question we frequently get is, “Can I run gems like nokogiri or database adapters that have C extensions on App Engine?” The answer is yes. We tested the top 1,000 Ruby libraries (a.k.a. gems) to ensure that the necessary dependencies are available. We also tested common tools like paperclip that don't build against C libraries but require them at runtime. We know that people use different versions of Ruby and Rails; App Engine obeys .ruby-version, and we support all currently supported versions of MRI. We've also tested the gems with Rails 3, Rails 4 and Rails 5. At Next we also announced that Postgres on Cloud SQL is in beta. All of these things should make it easier to move your Rails and Sinatra applications to App Engine. More info on using Ruby on Google Cloud Platform (GCP) is available at http://cloud.google.com/ruby.
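
For reference, deploying a Rack-based app to the flexible environment needs only a small app.yaml; a minimal sketch looks like this (the entrypoint shown is illustrative for a plain Rack app and will differ for Rails):

runtime: ruby
env: flex
entrypoint: bundle exec rackup --port $PORT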

New gems on tap

We also have three gems that have reached general availability, for Stackdriver Logging, Google Cloud Datastore and Google Cloud Storage. In addition, there are three gems currently in beta, for Google BigQuery, the Google Cloud Translation API and the Google Cloud Vision API. Our philosophy when working on the gems has been to embrace the Ruby ethos that programming should be fun; we try to make our gems idiomatic so they make sense to Rubyists. For example, our logging library provides a drop-in replacement for the standard Ruby logger:

require "google/cloud/logging"

logging = Google::Cloud::Logging.new

# Describe the monitored resource the log entries belong to
# (the resource type and labels here are illustrative).
resource = logging.resource "gae_app", module_id: "default", version_id: "v1"

logger = logging.logger "my_app_log", resource, env: :production

logger.info "Job started"
logger.info { "Job started" }
logger.debug?

With the Cloud Datastore gem, creating entities is similar to creating tables using ActiveRecord. And with Cloud Storage, you can upload files or Ruby IO objects. Using our products should not add significant cognitive load to your development tasks, and having a philosophy of "By Rubyists for Rubyists" makes that easier to do.
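
Here's a minimal sketch of both ideas (the kind, entity key, bucket and file names are illustrative):

require "google/cloud/datastore"
require "google/cloud/storage"

# Create and save a Datastore entity
datastore = Google::Cloud::Datastore.new
task = datastore.entity "Task", "sample-task" do |t|
  t["description"] = "Learn Cloud Datastore"
  t["done"]        = false
end
datastore.save task

# Upload a local file (or any Ruby IO object) to Cloud Storage
storage = Google::Cloud::Storage.new
bucket  = storage.bucket "my-app-assets"
bucket.create_file "local/path/report.csv", "reports/report.csv"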

RailsConf

If you want to try out some of these libraries or spin up an application on App Engine, come find us at RailsConf 2017 in Phoenix, Arizona later this month. We're proud to be a Gold sponsor again this year. Based on feedback from last year, we're making our booth more interactive with codelabs, demos and of course even more stickers.

We also have three folks from the Google Ruby team giving talks. Daniel Azuma's talk, "What's my app really doing in production?" will show you tools and tricks to instrument and debug misbehaving apps. Remi Taylor's talk, "Google Cloud <3 Ruby," will teach you about all the different tools we have for Ruby developers. Finally, in my talk, "Syntax isn't everything: NLP for Rubyists," I use the Google Cloud Natural Language API library and some stupid Ruby tricks to introduce you to natural language processing. If you'll be at RailsConf, we really hope you'll come say hi.

Google Cloud Storage introduces Cloud Pub/Sub notifications



Google Cloud Storage has always been a high-performance and cost-effective place to store data objects. Now it’s also easy to build workflows around those objects that are triggered by creating or deleting them, or changing their metadata.

Suppose you want to take some action every time a change occurs in one of your Cloud Storage buckets. You might want to automatically update sales projections every day when sales uploads its new daily totals. You might need to remove a resource from a search index when an object is deleted. Or perhaps you want to update the thumbnail when someone makes a change to an image. The ability to respond to changes in a Cloud Storage bucket gives you increased responsiveness, control and flexibility.

Cloud Pub/Sub Support


We’re pleased to announce that Cloud Storage can now send change notifications to a Google Cloud Pub/Sub topic. Cloud Pub/Sub is a powerful messaging platform that allows you to build fast, reliable and more secure messaging solutions. Cloud Pub/Sub support introduces many new capabilities to Cloud Storage notifications, such as pulling from subscriptions instead of requiring users to configure webhooks, multiplexing copies of each message to many subscribers and filtering messages by event type or prefix.

You can get started sending Cloud Storage notifications to Cloud Pub/Sub by reading our getting started guide. Once you’ve enabled the Cloud Pub/Sub API and downloaded the latest version of the gcloud SDK, you can set up notification triggers from your Cloud Storage bucket to your Cloud Pub/Sub topic with the following command:

$> gsutil notification create -f json -t your-topic gs://your-bucket

From that point on, any changes to the contents of your Cloud Storage bucket trigger a message to your Cloud Pub/Sub topic. You can then create Cloud Pub/Sub subscriptions on that topic and pull messages from those subscriptions in your programs, like in this example Python app.
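
A minimal pull sketch in Python might look like the following, assuming the google-cloud-pubsub client library and an existing subscription (the project and subscription names are placeholders):

from google.cloud import pubsub_v1

subscriber = pubsub_v1.SubscriberClient()
subscription_path = subscriber.subscription_path("your-project", "your-subscription")

def callback(message):
    # Cloud Storage notification details arrive as message attributes
    # (e.g. eventType, bucketId, objectId); the payload holds object metadata.
    print("Event:", message.attributes.get("eventType"), message.attributes.get("objectId"))
    message.ack()

# Start pulling messages and keep the listener alive for a while.
future = subscriber.subscribe(subscription_path, callback=callback)
try:
    future.result(timeout=60)
except Exception:
    future.cancel()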

Cloud Functions

Cloud Pub/Sub is a powerful and flexible way to respond to changes in a bucket. However, for some tasks you may prefer the simplicity of deploying a small, serverless function that just describes the action you want to take in response to a change. For that, Google Cloud Functions supports Cloud Storage triggers.

Cloud Functions is a quick way to deploy cloud-based scripts in response to a wide variety of events, for example an HTTP request to a certain URL, or a new object in a Cloud Storage bucket.

Once you get started with Google Cloud Functions, you can learn about setting up a Cloud Storage trigger for your function. It’s as simple as adding a “--trigger-bucket” parameter to your deploy command:

$> gcloud beta functions deploy helloWorld --stage-bucket cloud-functions --trigger-bucket your-bucket

It’s fun to think about what’s possible when Cloud Storage objects aren’t just static entities, but can trigger a wide variety of tasks. We hope you’re as excited as we are!

Stay up to speed with Cloud Launcher: more production-grade solutions, same easy-to-use service



We created Cloud Launcher to help make sure you can easily discover new software and services, whether it’s a small internal tool or a large-scale enterprise application. We’re excited to share several new additions to this catalog and introduce an even easier way to try them out.

Cloud Launcher Virtual Machine solutions are now a part of the new Always Free program. This allows you to test and develop with participating products at no cost up to this program’s limits. With sustained use discounts, free trial credits you can use for 12 months, custom machine shapes and now the Always Free program, there has never been a better time to try out Launcher solutions.

Here are a few areas where we’ve made updates to the Cloud Launcher catalog:

  • Expanded VM solutions library: We now have even more solutions running within virtual machines, ranging from big data analytics to databases.
  • Bring Your Own License (BYOL): You asked, we answered. Cloud Launcher now supports BYOL for many solutions, allowing you to use Cloud Launcher as a deployment vehicle for your existing licenses.
  • Standalone SaaS solutions: Now, you can sign up for services directly from our SaaS partner sites. Over 15 services are now available via Cloud Launcher, with many more on the horizon.

Missed us at Google Cloud Next ‘17? Learn how you can accelerate your application development with Cloud Launcher.

Read on to learn more about specific additions to the Cloud Launcher program, or try them out for yourself.

New VM solutions

Cloud Launcher VM solutions offer scale, performance and value that allow you to easily launch large compute clusters on Google's infrastructure.



SAP HANA: in-memory Platform for Business Digital Transformation

NodeSource: monitoring Node.js at Scale

Check Point: confidently extend advanced security to the public cloud

AppScale: open source Google App Engine

DataStax Enterprise: distributed database based on Apache Cassandra

Looker: Looker for Big Data - 25 Users: Make every petabyte of data accessible to your company.

MongoDB with Replication: NoSQL document-oriented database for content-driven applications

SUREedge Migrator: any application, any data, any source to Google Cloud

Zoomdata: big data visual analytics

New BYOL solutions


BYOL (Bring Your Own License) solutions let you run software on Google Compute Engine, using licenses you’ve purchased directly from third-party providers.




Barracuda: next generation firewall for distributed enterprises

Check Point: confidently extend advanced security to the public cloud

CloudBolt: self-service multi-cloud VM provisioning for your developers



New SaaS solutions


Browse managed services in Cloud Launcher—then purchase the solution directly on the provider’s site.




Aiven.io Services: Aiven is a next-generation managed cloud service hosting for your software infrastructure services.

Apigee Edge: intelligent API management: manage, secure, scale and analyze APIs

AppDynamics: business and application performance monitoring

ClearDB: databases made easy

ClicData dashboards: dashboards made easy

Cloudflare: performance and security solution for websites and applications

CrowdStrike Falcon: next generation endpoint protection for Google Cloud Platform

Datadog: monitor your entire Google Cloud Infrastructure

Dome9: verifiable security and compliance features for every public cloud

Fastly: Fastly is a content delivery network (CDN) that focuses on helping companies deliver dynamic content to their users faster.

Imperva Incapsula: application delivery and enterprise grade security from the Cloud

JFrog Artifactory: universal artifact repository

Kinvey: Kinvey is a leading HIPAA compliant mobile Backend as a Service  (Kinvey BaaS).

NetSkope: understand activities, protect sensitive data and mitigate risk

NewRelic: get code-level visibility for all your production apps

Premium WordPress: WordPress digital experience platform

Reblaze: superior web security

Segment: collect all of your customer data and send it anywhere

xPlenty: data integration cloud service

Wix Media Platform: the smartest way to host and deliver your media worldwide

Cloud Translation API Neural Machine Translation enters GA and adds more languages



Google introduced the paid version of the Translate API in 2011 with support for 50+ languages. Since then, we've continuously invested in the API, improving service scalability and expanding coverage to 100+ languages today. As a result, the Cloud Translation API has been widely adopted and deployed in scaled production environments by thousands of customers in the travel, finance and gaming verticals.

As part of Google’s continued investment in machine translation, we recently announced the beta launch of our Google Neural Machine Translation system (GNMT), which uses state-of-the-art training techniques and runs on TPUs to achieve some of the largest improvements for machine translation of the past decade. We had over 1,000 customers sign up to test the API and provide us with valuable feedback. For example, Grani VR Studio uses the high accuracy and low latency offered by neural machine translation to build interactive VR/AR experiences in different languages.


Today we're pleased to announce the general availability of the neural machine translation system to all our customers under the Standard Edition. The Premium Edition beta is now closed for new sign-ups and will re-open in the coming months as we roll out new features.

Here’s what you get with Neural Machine Translation:
  • Access to the highest-quality translation model, reducing translation errors by 55%-85% on several generally available language pairs
  • Support for seven new languages: English to and from Russian, Hindi, Vietnamese, Polish, Arabic, Hebrew and Thai. This is in addition to eight existing languages (English to and from Chinese, French, German, Japanese, Korean, Portuguese, Spanish and Turkish)
  • More languages in coming weeks. Please visit this page to keep track of new language support.
Standard Edition customers paying the list online price can access the neural translation system at no additional charge. As part of the announcement, we're also offering offline discounted pricing for usage of more than one billion characters per month. Please visit our pricing page for more information.
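
To try the neural model from the Standard Edition, you can pass the model parameter on a v2 translate request. Here's a sketch (the API key and text are placeholders):

$ curl "https://translation.googleapis.com/language/translate/v2?key=YOUR_API_KEY&q=Hello%20world&source=en&target=ru&model=nmt"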

We look forward to working with you as we continue to invest in bringing the best of Google technology to serve your translation needs.

Solution guide: Migrating your dedicated game servers to Google Cloud Platform



One of the greatest challenges for game developers is to accurately predict how many players will attempt to get online at the game's launch. Over-estimate, and risk overspending on hardware or rental commitments. Under-estimate, and players leave in frustration, never to return. Google Cloud can help you mitigate this risk while giving you access to the latest cloud technologies. Per-minute billing and automatically applied sustained use discounts can take the pain out of up-front capital outlays or trying to play catch-up while your player base shrinks.

The advantages for handling spiky launch-day demand are clear, but Google Cloud Platform's extensive network of regions also lets you put servers near players who would otherwise see high latency. Game studios no longer need an expensive datacenter buildout to offer a best-in-class game experience; just request Google Compute Engine resources where they're needed, when they're needed. With new regions coming online every year, you can add game servers near your players with a couple of clicks.

We recently published our "Dedicated Game Server Migration Guide" that outlines Google Cloud Platform’s (GCP) many advantages and differentiators for gaming workloads, and best practices for running these processes that we've learned working with leading studios and publishers. It covers the whole pipeline, from creating projects and getting your builds to the cloud, to distributing them to your VMs and running them, to deleting environments wholesale when they're no longer needed. Running game servers in Google Cloud has never been easier.

Quantifying the performance of the TPU, our first machine learning chip



We’ve been using compute-intensive machine learning in our products for the past 15 years. We use it so much that we even designed an entirely new class of custom machine learning accelerator, the Tensor Processing Unit.

Just how fast is the TPU, actually? Today, in conjunction with a TPU talk for a National Academy of Engineering meeting at the Computer History Museum in Silicon Valley, we’re releasing a study (this paper will be available from arXiv.org at 5pm PT today) that shares new details on these custom chips, which have been running machine learning applications in our data centers since 2015. This first generation of TPUs targeted inference (the use of an already trained model, as opposed to the training phase of a model, which has somewhat different characteristics), and here are some of the results we’ve seen:
  • On our production AI workloads that utilize neural network inference, the TPU is 15x to 30x faster than contemporary GPUs and CPUs.
  • The TPU also achieves much better energy efficiency than conventional chips, with a 30x to 80x improvement in TOPS/Watt (tera-operations [trillion or 10^12 operations] of computation per Watt of energy consumed).
  • The neural networks powering these applications require a surprisingly small amount of code: just 100 to 1500 lines. The code is based on TensorFlow, our popular open-source machine learning framework.
  • More than 70 authors contributed to this report. It really does take a village to design, verify, implement and deploy the hardware and software of a system like this.
The need for TPUs really emerged about six years ago, when we started using computationally expensive deep learning models in more and more places throughout our products. The computational expense of using these models had us worried. If we considered a scenario where people use Google voice search for just three minutes a day and we ran deep neural nets for our speech recognition system on the processing units we were using, we would have had to double the number of Google data centers!

TPUs allow us to make predictions very quickly, and enable products that respond in fractions of a second. TPUs are behind every search query; they power accurate vision models that underlie products like Google Image Search, Google Photos and the Google Cloud Vision API; they underpin the groundbreaking quality improvements that Google Translate rolled out last year; and they were instrumental in Google DeepMind's victory over Lee Sedol, the first instance of a computer defeating a world champion in the ancient game of Go.

We’re committed to building the best infrastructure and sharing those benefits with everyone. We look forward to sharing more updates in the coming weeks and months.

Google Container Engine fires up Kubernetes 1.6



Today we started to make Kubernetes 1.6 available to Google Container Engine customers. This release emphasizes significant scale improvements and additional scheduling and security options, making it easier than ever to run a Kubernetes cluster on Container Engine.

There were over 5,000 commits in Kubernetes 1.6 with dozens of major updates that are now available to Container Engine customers. Here are just a few highlights from this release:
  • Increase in number of supported nodes by 2.5 times: We've worked hard to support your workload no matter how large your needs. Container Engine now supports cluster sizes of up to 5,000 nodes, up from 2,000, while still maintaining our strict SLO for cluster performance. Some of the world's most popular apps (such as Pokémon GO) are already hosted on Container Engine, and the increase in scale means it can handle even more of the largest workloads.
  • Fully Managed Nodes: Container Engine has always helped keep your Kubernetes master in a healthy state; we're now adding the option to fully manage your Kubernetes nodes as well. With Node Auto-Upgrade and Node Auto-Repair, you can optionally have Google automatically update your cluster to the latest version and ensure your cluster’s nodes are always operating correctly (see the command sketch after this list). You can read more about both features here.
  • General Availability of Container-Optimized OS: Container Engine was designed to be a secure and reliable way to run Kubernetes. By using Container-Optimized OS, a locked-down operating system specifically designed for running containers on Google Cloud, we provide a default experience that's more secure, highly performant and reliable, helping ensure your containerized workloads run well. Read more details about Container-Optimized OS in this in-depth post.
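
As a sketch of the node management option above, enabling auto-upgrade and auto-repair when creating a cluster looks roughly like this (the cluster name and zone are placeholders; at the time of writing these flags were in the gcloud beta track):

$ gcloud beta container clusters create my-cluster \
    --zone us-central1-a \
    --enable-autoupgrade \
    --enable-autorepair
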
Over the past year, Kubernetes adoption has accelerated and we could not be more proud to host so many mission critical applications on the platform for our customers. Some recent highlights include:

Customers

  • eBay uses Google Cloud technologies including Container Engine, Cloud Machine Learning and AI for its ShopBot, a personal shopping bot on Facebook Messenger.
  • Smyte participated in the Google Cloud startup program and protects millions of actions a day on websites and mobile applications. Smyte recently moved from self-hosted Kubernetes to Container Engine.
  • Poki, a game publisher startup, moved to Google Cloud Platform (GCP) for greater flexibility, empowered by the openness of Kubernetes. This is a theme we covered at our Google Cloud Next conference: open source technology gives customers the freedom to come and go as they choose. Read more about their decision to switch here.
“While Kubernetes did nudge us in the direction of GCP, we’re more cloud agnostic than ever because Kubernetes can live anywhere.” — Bas Moeys, Co-founder and Head of Technology at Poki

To help shape the future of Kubernetes — the core technology Container Engine is built on — join the open Kubernetes community and participate via the kubernetes-users mailing list or chat with us on the kubernetes-users Slack channel.

We’re the first cloud to offer users the newest Kubernetes release, and with our generous 12-month free trial of $300 in credits, it’s never been simpler to get started. Try the latest release today.