
Cloud SQL for PostgreSQL: Managed PostgreSQL for your mobile and geospatial applications in Google Cloud



At Google Cloud Next ‘17, we announced support for PostgreSQL as part of Google Cloud SQL, our managed database service. With its extensibility, strong standards compliance and support from a vibrant open-source community, Postgres is the database of choice for many developers, especially for powering geospatial and mobile applications. Cloud SQL already supports MySQL, and now, PostgreSQL users can also let Google take care of mundane database administration tasks like applying patches and managing backups and storage capacity, and focus on developing great applications.

Feature highlights

Storage and data protection
  • Flexible backups: Schedule automatic daily backups or run them on-demand.
  • Automatic storage increase: Enable automatic storage increase and Cloud SQL will add storage capacity whenever you approach your limit.

Connections
  • Open standards: We embrace the PostgreSQL wire protocol (the standard connection protocol for PostgreSQL databases) and SSL, so you can access your database from nearly any application, running anywhere.
  • Security features: Our Cloud SQL Proxy creates a local socket and uses OAuth to help establish a secure connection with your application or PostgreSQL tool. It automatically creates the SSL certificate and makes secure connections easier whether your application uses a dynamic or a static IP address (see the example below).
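
For example, here’s a minimal sketch of connecting through the proxy from your own machine. The instance connection name, user and database are placeholders; check the Cloud SQL documentation for the exact invocation your SDK version expects:

# Start the proxy, pointing it at your instance's connection name.
./cloud_sql_proxy -instances=<PROJECT>:<REGION>:<INSTANCE>=tcp:5432

# In another terminal, connect with the standard psql client through the local port.
psql "host=127.0.0.1 port=5432 user=postgres dbname=postgres"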

Extensibility
  • Geospatial support: Easily enable the popular PostGIS extension for geospatial objects in Postgres.
  • Custom instance sizes: Create your Postgres instances with the optimal amount of CPU and memory for your workloads (see the sketch below).
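
As a sketch of both features, the commands below create an instance with a custom shape and then enable PostGIS. The instance name and shape are illustrative, and the gcloud flags reflect the beta documentation, so verify them against your SDK version:

# Create a PostgreSQL instance with a custom amount of CPU and memory.
gcloud beta sql instances create my-postgres-instance \
    --database-version=POSTGRES_9_6 --cpu=2 --memory=7680MiB

# Connect (for example, through the Cloud SQL Proxy) and enable PostGIS.
psql -h 127.0.0.1 -U postgres -c "CREATE EXTENSION postgis;"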


Create Cloud SQL for PostgreSQL instances customized to your needs.


More features coming soon

We’re continuing to improve Cloud SQL for PostgreSQL during beta. Watch for the following:

  • Automatic failover for high availability
  • Read replicas
  • Additional extensions
  • Precise restores with point-in-time recovery
  • Compliance certification as part of Google Cloud Platform’s BAA (Business Associate Agreement)

Case study: Descartes Labs delves into Earth’s resources with Cloud SQL for PostgreSQL

Using deep learning to make sense of vast amounts of image data from Google Earth Engine, NASA and other satellite sources, Descartes Labs delivers invaluable insights about natural resources and human populations. They provide timely and accurate forecasts on such things as the growth and health of crops, urban development, the spread of forest fires and the availability of safe drinking water across the globe.

Cloud SQL for PostgreSQL integrates seamlessly with the open-source components that make up Descartes Labs’ environment. Google Earth Engine combines a multi-petabyte catalog of satellite imagery and geospatial datasets with planetary-scale analysis capabilities and makes it available for scientists, researchers and developers to detect changes, map trends and quantify differences on the Earth's surface. With ready-to-use data sets and an API, Earth Engine data is core to Descartes Labs’ product. Combining this with NASA data and the popular OpenStreetMap data, Descartes Labs takes full advantage of the open source community.

Descartes Labs’ first application tracks corn crops based on a 13-year historical backtest. It predicts the U.S. corn yield faster and more accurately than the U.S. Department of Agriculture.

Descartes adopted Cloud SQL for PostgreSQL early on because it allowed them to focus on developing applications rather than on mundane database management tasks. “Cloud SQL gives us more time to work on products that provide value to our customers,” said Tim Kelton, Descartes Labs Co-founder and Cloud Architect. “Our individual teams, who are building micro services, can quickly provision a database on Cloud SQL. They don't need to bother compiling Geos, Proj4, GDAL, and Lib2xml to leverage PostGIS. And when PostGIS isn’t needed, our teams use PostgreSQL without extensions or MySQL, also supported by Cloud SQL.”

According to Descartes Labs, Google Cloud Platform (GCP) is like having a virtual supercomputer on demand, without all the usual space, power, cooling and networking issues. Cloud SQL for PostgreSQL is a key piece of the architecture that backs the company’s satellite image analysis applications.
In developing their newest application, GeoVisual Search, the team benefited greatly from automatic storage increases in Cloud SQL for PostgreSQL. “Ever tried to estimate how a compressed 54GB XML file will expand in PostGIS?” Tim Kelton asked. “It’s not easy. We enabled Cloud SQL’s automatic storage increase, which allows the disk to start at 10GB and, in our case, automatically expanded to 387GB. With this feature, we don’t waste money or time by under- or over-allocating disk capacity as we would on a VM.”
Because the team was able to focus on data models rather than on database management, development of the GeoVisual Search application proceeded smoothly. Descartes’ customers can now find the geospatial equivalent of a needle in a haystack: specific objects of interest in map images.

The screenshot below shows a search through two billion map tiles to find wind turbines.
Tim’s parting advice for startups evaluating cloud solutions: “Make sure the solution you choose gives you the freedom to experiment, lets your team focus on product development rather than IT management and aligns with your company’s budget.”

See what GCP can do for you


Sign up for a $300 credit to try Cloud SQL and the rest of GCP. Start with inexpensive micro instances for testing and development. When you’re ready, you can easily scale them up to serve performance-intensive applications. As a bonus, everyone gets the 100% sustained use discount during beta, regardless of usage.

Our partner ecosystem can help you get started with Cloud SQL for PostgreSQL. To streamline data transfer, reach out to Alooma, Informatica, Segment, Stitch, Talend and Xplenty. For help with visualizing analytics data, try ChartIO, iCharts, Looker, Metabase and Zoomdata.
"PostgreSQL is one of Segment’s most popular database targets for our Warehouses product. Analysts and administrators appreciate its rich set of OLAP features and the portability they’re ensured by it being open source. In an increasingly “serverless” world, Google’s Cloud SQL for PostgreSQL offering allows our customers to eschew costly management and operations of their PostgreSQL instance in favor of effortless setup, and the NoOps cost and scaling model that GCP is known for across their product line."   Chris Sperandio, Product Lead, Segment
"At Xplenty, we see steady growth of prospects and customers seeking to establish their data and analytics infrastructure on Google Cloud Platform. Data integration is always a key challenge, and we're excited to support both Google Cloud Spanner and Cloud SQL for PostgreSQL both as data sources as well as targets, to continue helping companies integrate and prepare their data for analytics. With the robustness of Cloud Spanner and the popularity of PostgreSQL, Google continues to innovate and prove it is a world leader in cloud computing."   Saggi Neumann, CTO, Xplenty

No matter how far we take Cloud SQL, we still feel like we’re just getting started. We hope you’ll come along for the ride.


Crash exploitability analysis on Google Cloud Platform: security in plaintext




When an application or service crashes, do you wonder what caused the crash? Do you wonder if the crash poses a security risk?

An important element in platform hardening is properly handling server process crashes. When a process crashes unexpectedly, it suggests there may be a security problem an attacker could exploit to compromise a service. Even highly reliable user-facing services can depend on internal server processes that crash. At Google, we collect crashes for analysis and automatically flag and analyze those with potential security implications.

Security vulnerabilities in crashes

Analyzing crashes is a widespread security practice — this is why, when you run Google Chrome, you’re asked if it’s okay to send data about crashes back to the company.

At Google Cloud Platform (GCP), we monitor for crashes in the processes that manage customer VMs and across our services, using standard processes to protect customer data in GCP.

There are many different security issues that can cause a crash. One well-known example is a use-after-free vulnerability. A use-after-free vulnerability occurs when you attempt to use a region of memory that’s already been freed.

Most of the time, a use-after-free action simply causes the program to crash. However, if an attacker has the ability to properly manipulate memory, there’s the potential for them to exploit the vulnerability and gain arbitrary code execution capabilities.
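
For illustration, here’s a deliberately simplified C sketch of the pattern; real vulnerabilities are rarely this obvious:

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

int main(void) {
    char *buf = malloc(32);
    strcpy(buf, "session-token");
    free(buf);              /* buf is now a dangling pointer */

    /* Use-after-free: undefined behavior. The allocator may already have
       reused this memory, so the read below can leak or corrupt unrelated
       data -- exactly the window an attacker tries to control. */
    printf("%s\n", buf);
    return 0;
}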

One recent example of a use-after-free was CVE-2016-5177. In this instance, a use-after-free was found by an external researcher in the V8 JavaScript Engine used by Chrome. The issue was fixed in a September 2016 release of Chrome.

Analysis tactics

Debugging a single crash can be difficult. But how do you handle debugging crashes when you have to manage thousands of server jobs?

In order to help secure a set of rapidly evolving products such as Google Compute Engine, Google App Engine and the other services that comprise GCP, you need a way to automatically detect problems that can lead to crashes.

In Compute Engine’s early days, when we had a much smaller fleet of virtual machines running at any given time, it was feasible for security engineers to analyze crashes by hand.

We would load crash dumps into gdb and look at the thread that caused a crash. This provided detailed insight into the program state prior to a crash. For example, gdb allows you to see whether a program is executing from a region of memory marked executable. If it’s not, you may have a security issue.
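
A manual triage session on a core dump might look something like this (the commands are standard gdb; the binary and core file names are placeholders):

gdb /path/to/server core.12345

(gdb) bt                  # backtrace of the crashing thread
(gdb) info registers      # register state at the moment of the crash
(gdb) x/i $pc             # the instruction being executed; if $pc points
                          # outside executable memory, look closer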

Analyzing crashes in gdb worked well, but as Cloud grew to include more services and more users, it was no longer feasible for us to do as much of this analysis by hand.

Automating analysis

We needed a way to automate checking crashes for use-after-free vulnerabilities and other security issues. That meant integrating with the systems used to collect crash data across Google, and running an initial set of signals against each crash to flag it either as a security problem to be fixed or as one requiring further analysis.

Automating this triage was important, because crashes can occur for many reasons and may not pose a security threat. For instance, we expect to see many crashes just from routine stress testing. If, however, a security problem is found, we automatically file a bug that details the specific issue and assign it an exploitability rating.

Always evolving

Maintaining a platform with high security standards means going up against attackers who are always evolving, and we're always working to improve in turn.

We're continually improving our crash analysis to automatically detect more potential security problems, better determine the root cause of a crash and even identify required fixes.

Digging deep on PHP 7.1 for Google App Engine



Developers love to build web applications and APIs with PHP, and we were delighted to announce last week at Google Cloud Next ‘17 that PHP 7.1 is available on Google App Engine. App Engine is our easy-to-use platform for building, deploying, managing and automatically scaling services on Google’s infrastructure. The PHP 7.1 runtime is available on the App Engine flexible environment, and is currently in beta.

Getting started


To help you get started with PHP on App Engine, we’ve built a collection of getting started guides, samples, codelabs and interactive tutorials that walk through creating your code, using our APIs and services, and deploying to production.

When running PHP on App Engine, you can use the tools and databases you already know and love, including Laravel, Symfony, Wordpress, or any other web framework. You can also use MongoDB, MySQL, or Cloud Datastore to store your data. And while the runtime is flexible enough to manage most applications and services, if you want more control over the underlying infrastructure, you can easily migrate to Google Container Engine or Google Compute Engine.

Deploying to App Engine on PHP 7.1


To deploy a simple application to App Engine on PHP 7.1, download and install the Google Cloud SDK. Once you’ve done this, run the following commands:

echo "<?php echo 'Hello, World';"> index.php
gcloud app deploy

This generates an app.yaml with the following values:

env: flex
runtime: php
runtime_config:
  document_root: .

Once the application is deployed, you can view it in the browser, or go to the Cloud Console to view the running instances.

Installing dependencies


For dependency management, we recommend using Composer. Dependencies declared in composer.json are automatically installed when you deploy to the App Engine flexible environment, and your deployment uses the PHP version specified in composer.json. For example, to pin your app to PHP 7.1:

composer require "php:7.1.*" --ignore-platform-reqs

Using Google’s APIs and services


Using the Google Cloud client library, you can take advantage of our advanced APIs and services such as our scalable NoSQL database Google Cloud Datastore, Google Cloud Pub/Sub, and Google BigQuery. To use the Google Cloud client library, install the code using Composer (this example assumes composer is installed globally):

composer require google/cloud


This creates a file composer.json with the most recent version of Google Cloud PHP (currently 0.24.0).

{
    "require": {
        "google/cloud": "^0.24.0"
    }
}


App Engine detects the project ID of the instance and authenticates using the App Engine service account. That means you can run, say, a BigQuery query with a few lines of code, with no additional authentication! For example, add the following code to index.php to call BigQuery:

<?php
// Load the Composer autoloader for the google/cloud package.
require_once __DIR__ . '/vendor/autoload.php';

// On App Engine, the project ID and credentials are detected automatically
// from the environment, so no explicit configuration is needed.
$client = new Google\Cloud\BigQuery\BigQueryClient();

// Top 10 corpora in the public Shakespeare dataset by unique word count.
$query = 'SELECT TOP(corpus, 10) as title, COUNT(*) as unique_words ' .
         'FROM [publicdata:samples.shakespeare]';
$queryResults = $client->runQuery($query);
foreach ($queryResults->rows() as $result) {
    print($result['title'] . ': ' . $result['unique_words'] . PHP_EOL);
}


Add this to a directory with the above composer.json file, and deploy it to App Engine flexible environment:

gcloud app deploy 
gcloud app browse

The second command will open your browser window to your deployed project, and you will see a printed list of BigQuery results!

Use your favorite framework


The PHP community uses a myriad of frameworks. We have code samples for setting up applications in Laravel, Symfony, Drupal, Wordpress, and Silex, as well as a Wordpress plugin that integrates with Google Cloud Storage. Keep an eye on the tutorials page as we add more frameworks and libraries, and be sure to create an issue for any tutorials you’d like to see.

Commitment to PHP and open source


At Google, we’re committed to open source. As such, the new core PHP Docker runtime, the google-cloud Composer package and the Google API client are all open source and available on GitHub.


We’re thrilled to welcome PHP developers to Google Cloud Platform, and we’re committed to making further investments to help make you as productive as possible. This is just the start -- stay tuned to the blog and our GitHub repositories to catch the next wave of PHP support on GCP.

We can’t wait to hear from you. Feel free to reach out to us on Twitter @googlecloud, or request an invite to the Google Cloud Slack community and join the #PHP channel.

Discover and redact sensitive data with the Data Loss Prevention API



Last week at Google Cloud Next '17, we introduced a number of security enhancements across Google Cloud, including the Data Loss Prevention API. Like many Google Cloud Platform (GCP) products, the DLP API began its life as an internal product used in development and support workflows. It also uses the same codebase as DLP on Gmail and Drive.

Now in beta, the DLP API gives GCP users access to a classification engine that includes over 40 predefined content templates for credit card numbers, social security numbers, phone numbers and other sensitive data. Users send the API textual data or images and get back metadata such as likelihood and offsets (for text) and bounding boxes (for images).
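
As a sketch, an inspection request against the beta REST endpoint might look like the following. The v2beta1 path, field names and infoType identifier shown here are assumptions based on the beta documentation, so consult the current API reference before relying on them:

curl -s -H "Authorization: Bearer $(gcloud auth print-access-token)" \
     -H "Content-Type: application/json" \
     https://dlp.googleapis.com/v2beta1/content:inspect \
     -d '{
       "inspectConfig": { "infoTypes": [ { "name": "PHONE_NUMBER" } ] },
       "items": [ { "type": "text/plain", "value": "Call me at (415) 555-0100" } ]
     }'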


Be smart with your data

The DLP API helps you minimize what data you collect, expose or copy. For example, it can be used to automatically classify or redact sensitive data from a text stream before you write it to disk, generate logs or perform analysis. Use it to alert users before they save sensitive data in an application or triage content to the right storage system or user based on the presence of sensitive content.


Your data is your most critical asset

The DLP API helps you to manage and run analytics on cloud data, without introducing additional risk to your organization. Pre-process with the DLP API, then analyze trends in Google BigQuery, understand context with Google Cloud Natural Language API and run predictive models with Cloud Machine Learning Engine — all on redacted textual content.

Try out the DLP API with our demo application. Watch as it detects credit card numbers based on pattern formatting, contextual information and checksum.
To find out more and get started, visit the DLP API product page.

Cloud KMS GA, new partners expand encryption options



As you heard at Google Cloud Next ‘17, our Cloud Key Management Service (KMS) is now generally available. Cloud KMS makes it even easier for you to encrypt data at scale, manage secrets and protect your data the way you want — both in the cloud and on-premise. Today, we’re also announcing a number of partner options for using Customer-Supplied Encryption Keys.


With Cloud KMS, you can manage symmetric encryption keys in a cloud-hosted solution, whether they’re used to protect data stored in Google Cloud Platform (GCP) or another environment. You can create, use, rotate and destroy keys via our Cloud KMS API, including as part of a secret management or envelope encryption solution. Further, Cloud KMS is directly integrated with Cloud Identity Access Management and Cloud Audit Logging for greater control over your keys.

As we move out of beta, we’re introducing an availability SLA, so you can count on Cloud KMS for your production workloads. We’ve load tested Cloud KMS extensively, and reduced latency so that Cloud KMS can sit in the serving path of your requests.

Ravelin, a fraud detection provider, has continued their use of Cloud KMS to encrypt secrets stored locally, including configurations and authentication credentials, used for both customer transactions and internal systems and processes. Using Cloud KMS allows Ravelin to easily encrypt these secrets for storage.
“Encryption is absolutely critical to any company managing their own systems, transmitting data over a network or storing sensitive data, including sensitive system configurations. Cloud KMS makes it easy to implement best practices for secret management, and its low latency allows us to use it for protecting frequently retrieved secrets. Cloud KMS gives us the cryptographic tools necessary to protect our secrets, and the features to keep encryption practical.” — Leonard Austin, CTO at Ravelin

Managing your secrets in Google Cloud


We’ve published recommendations on how to manage your secrets in Google Cloud. Most development teams have secrets that they need to manage at build or run time, such as API keys. Instead of storing those secrets in source code or in metadata, in many cases we suggest storing them encrypted at rest in a Google Cloud Storage bucket, using Cloud KMS to encrypt them.
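
As a minimal sketch of that pattern with the gcloud and gsutil CLIs (key ring, key and bucket names are placeholders; verify the flag names against your SDK version):

# One-time setup: create a key ring and a key.
gcloud kms keyrings create my-keyring --location global
gcloud kms keys create app-secrets --keyring my-keyring --location global \
    --purpose encryption

# Encrypt the secret locally, then store only the ciphertext in Cloud Storage.
gcloud kms encrypt --location global --keyring my-keyring --key app-secrets \
    --plaintext-file config.json --ciphertext-file config.json.enc
gsutil cp config.json.enc gs://my-secrets-bucket/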

Customer-Supplied Encryption Key partners


You now have several partner options for using Customer-Supplied Encryption Keys. Customer-Supplied Encryption Keys (or CSEK, available for Google Cloud Storage and Compute Engine) allow you to provide a 256-bit string, such as an AES encryption key, to protect your data at rest. Typically, customers use CSEK when they have stricter regulatory needs, or need to provide their own key material.

To simplify the use of this unique functionality, our partners Gemalto, Ionic, KeyNexus, Thales and Virtru, can generate CSEK keys in the appropriate format. These partners make it easier to generate an encryption key for use with CSEK, and to associate that key to an object in Cloud Storage or a persistent disk, image or instance in Compute Engine. Each partner brings differentiated features and value to the table, which they describe in their own words below.

Gemalto
“Gemalto is dedicated to multi-cloud enterprise key management by ensuring customers have the best choices to maintain high assurance key ownership and control as they migrate operations, workloads and data to the cloud. Gemalto KeySecure has supported Client-Side Encryption with Google Cloud Storage for years, and is now extending support for Customer Supplied Encryption Keys (CSEK)." — Todd Moore, SVP of Encryption Products at Gemalto

Ionic
"We are excited to announce the first of many powerful capabilities leveraging Google's Customer Supplied Encryption Keys (CSEK). Our new Ionic Protect for Cloud Storage solution enables developers to simply and seamlessly use their own encryption keys with the full capabilities of the Ionic platform while natively leveraging Google Cloud Storage.”  Adam Ghetti, Founder and CEO of Ionic

KeyNexus
"KeyNexus helps customers supply their own keys to encrypt their most sensitive data across Google Cloud Platform as well as hundreds of other bring-your-own-key (BYOK) use cases spanning SaaS, IaaS, mobile and on-premise, via secure REST APIs. Customers choose KeyNexus as a centralized, platform-agnostic, key management solution which they can deploy in numerous highly available, scalable and low latency cloud or on-premise configurations. Using KeyNexus, customers are able to supply keys to encrypt data server-side using Customer-Supplied Encryption Keys (CSEKs) in Google Cloud Storage and Google Compute Engine"  Jeff MacMillan, CEO of KeyNexus

Thales
“Protected by FIPS 140-2 Level 3 certified hardware, the Thales nShield HSM uses strong methods to generate encryption keys based on its high-entropy random number generator. Following generation, nShield exports customer keys into the cloud for one-time use via Google’s Customer-Supplied Encryption Key functionality. Customers using Thales nShield HSMs and leveraging Google Cloud Platform can manage their encryption keys from their own environments for use in the cloud, giving them greater control over key material.” — Sol Cates, Vice President of Technical Strategy at Thales e-Security

Virtru
"Virtru offers business privacy, encryption and data protection for Google Cloud. Virtru lets you choose where your keys are hosted and how your content is encrypted. Whether for Google Cloud Storage, Compute Engine or G Suite, you can upload Virtru-generated keys to Google’s CSEK or use Virtru’s client-side encryption to protect content before upload. Keys may be stored on premise or in any public or private cloud." — John Ackerly, Founder and CEO of Virtru

Encryption by default, and more key management options


Recall that by default, GCP encrypts customer content stored at rest, without any action required from the customer, using one or more encryption mechanisms with keys managed server-side.

Google Cloud provides you with options to choose the approach that best suits your needs. If you prefer to manage your cloud-based keys yourself, select Cloud KMS; and if you’d like to manage keys with a partner or on-premise, select Customer-Supplied Encryption Keys.
Safe computing!

ASP.NET Core containers run great on GCP



With the recent release of ASP.NET Core, the .NET community has a cross-platform, open-source option that allows you to run Docker containers on Google App Engine and manage containerized ASP.NET Core apps with Kubernetes. In addition, we announced beta support for ASP.NET Core on App Engine flexible environment last week at Google Cloud Next. In this post, you’ll learn more about that as well as about support for Container Engine and how we integrate this support into Visual Studio and into Stackdriver!

ASP.NET Core on App Engine Flexible Environment

Support for ASP.NET Core on App Engine means that you can publish your ASP.NET Core app to App Engine (running on Linux inside a Docker container). To do so, you’ll need an app.yaml that looks like this:

runtime: aspnetcore
env: flex

Use the “runtime” setting of “aspnetcore” to get a Google-maintained and supported ASP.NET Core base Docker image. The new ASP.NET Core runtime also provides Stackdriver Logging for any messages that are routed to standard error or standard output. You can use this runtime to deploy your ASP.NET Core apps to App Engine or to Google Container Engine.

Assuming you have your app.yaml file at the root of your project, you can publish to App Engine flexible environment with the following commands:

dotnet restore
dotnet publish -c Release
copy app.yaml .\bin\Release\netcoreapp1.0\publish\app.yaml
gcloud beta app deploy .\bin\Release\netcoreapp1.0\publish\app.yaml
gcloud app browse

In fact, you don’t even need that last command to publish the app — it simply opens the deployed app in your browser once it’s been published.

ASP.NET Core on Container Engine

To publish this same app to Container Engine, you need a Kubernetes cluster and the corresponding credentials cached on your local machine:

gcloud container clusters create cluster-1
gcloud container clusters get-credentials cluster-1

To deploy your ASP.NET Core app to your cluster, you must first package it in a Docker container. You can do that with Google Cloud Container Builder, a service that builds container images in the cloud without requiring a local Docker installation. To use it, create a new file in the root of your project called cloudbuild.yaml with the following content:

steps:
- name: 'gcr.io/gcp-runtimes/aspnetcorebuild-1.0:latest'
- name: 'gcr.io/cloud-builders/docker:latest'
  args: [ 'build', '-t', 'gcr.io/<projectid>/app:0.0.1', '--no-cache', '--pull', '.' ]
images:
- 'gcr.io/<projectid>/app:0.0.1'

This file takes advantage of the same ASP.NET Core runtime that we used for App Engine. Replace each <projectid> with the ID of the project where you want to run your app. To build the Docker image for your published ASP.NET Core app, run the following commands:


dotnet restore
dotnet publish -c Release
gcloud container builds submit --config=cloudbuild.yaml .\bin\release\netcoreapp1.0\publish\

Once this is finished, you'll have an image called gcr.io/<projectid>/app:0.0.1 (the tag specified in cloudbuild.yaml) that you can deploy to Container Engine with the following commands:

kubectl run <MYSERVICE> --image=gcr.io/<projectid>/app:0.0.1 --replicas=2 --port=8080

kubectl expose deployment <MYSERVICE> --port=80 --target-port=8080 --type=LoadBalancer

kubectl get services

Replace <MYSERVICE> with the desired name for your service and these two commands will deploy the image to Container Engine, ensure that there are two running replicas of your service and expose an internet-facing service that load-balances requests between replicas. The final command provides the external IP address of your newly deployed ASP.NET Core service so that you can see it in action.


GCP ASP.NET Core runtime in Visual Studio

Being able to deploy from the command line is great for automated CI/CD processes. For more interactive usage, we’ve also built full support for deploying to both App Engine and Container Engine from Visual Studio via the Cloud Tools for Visual Studio extension. Once it’s installed, simply right-click on your ASP.NET Core project in the Solution Explorer, choose Publish to Google Cloud and choose where to run your code.

If you deploy to App Engine, you can choose App Engine-specific options without an app.yaml file. Likewise, if you choose Container Engine, you receive Kubernetes-specific options that also don’t require any configuration files.
The same underlying commands are executed regardless of whether you deploy from the command line or from within Visual Studio (not counting differences between App Engine and Container Engine, of course). Choose the option that works best for you.

For more details about deploying from Visual Studio to App Engine and to Container Engine, check out the documentation. And if you’d like some help choosing between App Engine and Container Engine, the computing and hosting services section of the GCP overview provides some good guidance.


App Engine in Google Cloud Explorer

If you deploy to App Engine, the App Engine node in Cloud Explorer provides additional information about running services and versions inside Visual Studio.

The Google App Engine node lists all of the services running in your project. You can drill down into each service and see all of the versions deployed for that service, their traffic allocation and their serving status. You can perform most common operations directly from Visual Studio by right-clicking on the service, or version, including managing the service in the Cloud Console, browsing to the service or splitting traffic between versions of the service.

For more information about App Engine support for ASP.NET Core, I recommend the App Engine documentation for .NET.


Client Libraries for ASP.NET Core

There are more than 100 Google APIs available for .NET in NuGet, which means that it’s easy to get to them from the command line or from Visual Studio.
These same libraries work for both ASP.NET and ASP.NET Core, so feel free to use them from your container-based apps on GCP.


Stackdriver support for ASP.NET Core

Some of the most important libraries for you to use in your app are going to be those associated with what happens to your app once it’s running in production. As I already mentioned, simply using the ASP.NET Core runtime for GCP with your App Engine or Container Engine apps automatically routes the standard and error output to Stackdriver Logging. However, for more structured log statements, you can also use the Stackdriver logging API for ASP.NET Core directly:


using Google.Cloud.Diagnostics.AspNetCore;
...
public void Configure(ILoggerFactory loggerFactory) {
    loggerFactory.AddGoogle("<projectid>");
}
...
public void LogMessage(ILoggerFactory loggerFactory) {
    var logger = loggerFactory.CreateLogger("[My Logger Name]");
    logger.LogInformation("This is a log message.");
}


To see your log entries, go to the Stackdriver Logging page. If you want to track unhandled exceptions from your ASP.NET Core app so that they show up in Stackdriver Error Reporting, you can do that too:

public void Configure(IApplicationBuilder app) {
    string projectId = "<projectid>";
    string serviceName = "<service-name>";
    string version = "<version>";
    app.UseGoogleExceptionLogging(projectId, serviceName, version);
}


To see unhandled exceptions, go to Stackdriver Error Reporting. Finally, if you want to trace the performance of incoming HTTP requests to ASP.NET Core, you can set that up like so:

public void ConfigureServices(IServiceCollection services) {
    services.AddGoogleTrace("<projectid>");
}
...
public void Configure(IApplicationBuilder app) {
    app.UseGoogleTrace();
}

To see how your app performs, go to the Stackdriver Trace page for detailed reports — for example, a timeline of how a frontend interacted with a backend and how the backend interacted with Datastore.
Stackdriver integration into ASP.NET Core lets you use Logging, Error Reporting and Trace to monitor how well your app is doing in production quickly and easily. For more details, check out the documentation for Google.Cloud.Diagnostics.AspNetCore.

Where are we?

As containers become more central to app packaging and deployment, the GCP ASP.NET Core runtime lets you bring your ASP.NET skills, processes and assets to GCP. You get a Google-supported and maintained runtime and unstructured logging out of the box, as well as easy integration into Stackdriver Logging, Error Reporting and Trace. Further, you get all of the Google APIs in NuGet that support ASP.NET Core apps. And finally, you can choose between automated deployment processes from the command line, or interactive deployment and resource management from inside of Visual Studio.

Combine that with Google’s deep expertise in containers exposed via App Engine flexible environment and Google Container Engine (our hosted Kubernetes offering), and you get a great place to run your ASP.NET Core apps and services.

Google Cloud Functions: a serverless environment to build and connect cloud services



Developers rely on many cloud services to build their apps today: everything from storage and messaging services like Google Cloud Storage and Google Cloud Pub/Sub and mobile development platforms like Firebase, to data and analytics platforms like Google Cloud Dataflow and Google BigQuery. As developers consume more cloud services from their applications, it becomes increasingly complex to coordinate them and ensure they all work together seamlessly. Last week at Google Cloud Next '17, we announced the public beta of a new capability for Google Cloud Platform (GCP) called Google Cloud Functions that allows developers to connect services together and extend their behavior with code, or to build brand new services using a completely serverless approach.

With Cloud Functions you write simple, single-purpose functions that are attached to events emitted from cloud services. Your Cloud Function is triggered when an event being watched is fired. Your code executes in a fully managed environment and can effectively connect or extend services in Google’s cloud, or services in other clouds across the internet; no need to provision any infrastructure or worry about managing servers. A function can scale from a few invocations a day to many millions of invocations without any work from you, and you only pay while your function is executing.

Asynchronous workloads like lightweight ETL, or cloud automation tasks such as triggering an application build no longer require an always-on server that's manually connected to the event source. You simply deploy a Cloud Function bound to the event you want and you're done.
"Semios uses Google Cloud Functions as a critical part of our data ingestion pipeline, which asynchronously aggregates micro-climate telemetry data from our IoT network of 150,000 in-field sensors to give growers real-time insights about their orchards."
— Maysam Emadi, Data Scientist, Semios
Cloud Functions’ fine-grained nature also makes it a perfect candidate for building lightweight APIs, microservices and webhooks. HTTP endpoints are automatically configured when you deploy a function you intend to trigger using HTTP — no complicated configuration (or integration with other products) required. Simply deploy your function with an HTTP trigger, and we'll give you back a secure URL you can curl immediately.
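
As a sketch, an HTTP function can be as small as this (the function name is illustrative):

// index.js -- a minimal HTTP-triggered Cloud Function.
exports.helloHttp = function (req, res) {
  res.status(200).send('Hello, ' + (req.body.name || 'World') + '!');
};

Deploy it and gcloud prints back the HTTPS URL for the function; the staging bucket, which the beta commands use to upload your source, is a placeholder here:

gcloud beta functions deploy helloHttp --stage-bucket my-staging-bucket --trigger-http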
"At Vroom, we work with a number of partners to market our services and provide us with leads. Google Cloud Functions makes integration with these partners as simple as publishing a new webhook, which scales automatically with use, all without having to manage a single machine." — Benjamin Rothschild, Director of Analytics, Vroom
If you're a mobile developer using Firebase, you can now connect your Firebase app to one or more Cloud Functions by binding a Cloud Function to mutation events in the Firebase Realtime Database, events from Firebase Authentication, and even execute a Cloud Function in response to a conversion event in Firebase Analytics. You can find out more about this Firebase integration at https://firebase.google.com/features/functions.

Cloud Functions also empowers developers to quickly and easily build messaging bots and create custom actions for Google Assistant.
“At Meetup, we wanted to improve developer productivity by integrating task management with Slack. Google Cloud Functions made this integration as simple as publishing a new HTTP function. We’ve now rolled the tool out across the entire organization without ever touching a server or VM.” — Jose Rodriguez, Lead of Engineering Effectiveness, Meetup
In our commitment to openness, Cloud Functions uses only standard, off-the-shelf runtimes and doesn’t require any proprietary modules or libraries in your code: your functions will just work. In addition, the execution environment doesn't rely on a proprietary or forked operating system, which means your dependencies have native library compatibility. We currently support the Node.js runtime and have a set of open source Node.js client libraries for connecting to a wide range of GCP services.

As part of the built-in deployment pipeline we'll resolve all dependencies by running npm install for you (or npm rebuild if you provide packages that require compilation), so you don't have to worry about building for a specific environment. We also have an open source local emulator so you can build and quickly iterate on your Cloud Functions from your local machine.
"Node.js is continually growing across the cloud, especially when it comes to the container and serverless space. This new offering from Google, built in collaboration with the open source community, will provide even more options to the Node.js community going forward.” — Mikeal Rogers, Community Manager, Node.js Foundation
Head over to our quickstart guide to dive right in! Best of all, we've created a generous free tier to allow you to experiment, prototype and play with the product without spending a dime. You can find out more on our pricing page.

We look forward to seeing what you create with Cloud Functions. We’d love to hear your feedback on StackOverflow.

Your favorite languages, now on Google App Engine



Since 2008, Google App Engine has made it easy to build web applications, APIs and mobile backends at Google scale. Our core goal has always been to let developers focus on code, while we handle the rest. Liberated from the need to manage and patch servers, hand-hold rollouts and maintain infrastructure, organizations from startups to Fortune 500 companies have been able to achieve unprecedented time to market, scale and agility on our platform.

At Google Cloud Next last week, we delivered on the promise of Google App Engine while evolving the platform toward the openness and flexibility that developers demand. Any language. Any framework. Any library. The App Engine team is thrilled that the App Engine flexible environment is now generally available.



Your favorite languages, libraries and tools

General availability means support for Node.js, Ruby, Java 8, Python 2.7 or 3.5, and Go 1.8 on App Engine. All of these runtimes are containerized, and are of course available as open source on GitHub.

If we don’t have support for the language you want to use, bring your own. If it runs in a Docker container, you can run it on App Engine. Like Swift? Want to run Perl? Love Elixir? Need to migrate your Parse app? You can do all this and more in App Engine.

In addition to the GA supported runtimes, we’re also excited to announce two new beta runtimes today: ASP.NET Core and PHP 7.1.

ASP.NET Core on App Engine goes beta

With this release, we also announced beta support for ASP.NET Core on App Engine. This is a great choice for developers building web applications with C# and .NET Core who want to enjoy the benefits of running on App Engine. The Google Cloud .NET client libraries make it easy to use the full breadth of Google Cloud services from your application, and are currently available on NuGet.

To make developing applications for .NET Core on GCP even better, we’ve added support for deploying your apps directly with the Cloud Tools for Visual Studio extension.


To get started, check out the App Engine for .NET getting started guide.

PHP 7.1 on App Engine goes beta

Along with .NET support, PHP 7.1 support on App Engine is now in beta. This runtime allows you to choose among PHP 5.6, 7.0 and 7.1. There are step-by-step guides for running Symfony, Laravel or Drupal, and our client libraries make it easy to take advantage of Google Cloud Platform’s advanced APIs and services.

To get started, check out the App Engine for PHP getting started guide.

Our commitment to open source

At Google, we’re committed to open source and open development. The Docker-based App Engine runtimes, the client libraries, the tooling — all open source, and available on GitHub.



The best part about these runtimes and libraries is that they run anywhere that supports a Docker-based environment. The code you write for App Engine works across App Engine, Google Container Engine or Google Compute Engine. You can even grab your Docker image and run it on your own infrastructure.

We’re excited to welcome developers of all languages to App Engine. We’d like to extend a warm welcome to Node.js, Ruby and .NET developers, and we’re committed to making further investments to help make you as productive as possible.

If you’re an App Engine developer who loves the unique features of the standard environment, we’ve got more coming for you too. Over the next few months, we’ll be rolling out support for Java 8, updated libraries and improved connectivity with other GCP services. Developers that sign up for the alpha release of Java 8 on the App Engine standard environment can get started today. You can expect multiple announcements on both the standard and flexible environments in App Engine in the coming months.

We can’t wait to hear what you think. If you’re new to Google Cloud Platform (GCP), make sure to sign up and give it a try. Feel free to reach out to us on Twitter @googlecloud, or request an invite to the Google Cloud Slack community and join the #appengine channel.




Google Cloud Platform: your Next home in the cloud



San Francisco — Today at Google Cloud Next ‘17, we’re thrilled to announce new Google Cloud Platform (GCP) products, technologies and services that will help you imagine, build and run the next generation of cloud applications on our platform.

Bring your code to App Engine, we’ll handle the rest

In 2008, we launched Google App Engine, a pioneering serverless runtime environment that lets developers build web apps, APIs and mobile backends at Google-scale and speed. For nearly 10 years, some of the most innovative companies have built applications that serve their users all over the world on top of App Engine. Today, we’re excited to announce the general availability of a major expansion of App Engine, centered around openness and developer choice, that keeps App Engine’s original promise to developers: bring your code, we’ll handle the rest.

App Engine now supports Node.js, Ruby, Java 8, Python 2.7 or 3.5, Go 1.8, plus PHP 7.1 and .NET Core, both in beta, all backed by App Engine’s 99.95% SLA. Our managed runtimes make it easy to start with your favorite languages and use the open source libraries and packages of your choice. Need something different than what’s out of the box? Break the glass and go beyond our managed runtimes by supplying your own Docker container, which makes it simple to run any language, library or framework on App Engine.

The future of cloud is open: take your app to-go by having App Engine generate a Docker container containing your app and deploy it to any container-based environment, on or off GCP. App Engine gives developers an open platform while still providing a fully managed environment where developers focus only on code and on their users.


Cloud Functions public beta at your service

Up one level from fully managed applications, we’re launching Google Cloud Functions into public beta. Cloud Functions is a completely serverless environment to build and connect cloud services without having to manage infrastructure. It’s the smallest unit of compute offered by GCP and is able to spin up a single function and spin it back down instantly. Because of this, billing occurs only while the function is executing, metered to the nearest one hundred milliseconds.

Cloud Functions is a great way to build lightweight backends, and to extend the functionality of existing services. For example, Cloud Functions can respond to file changes in Google Cloud Storage or incoming Google Cloud Pub/Sub messages, perform lightweight data processing/ETL jobs or provide a layer of logic to respond to webhooks emitted by any event on the internet. Developers can securely invoke Cloud Functions directly over HTTP right out of the box without the need for any add-on services.
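
For instance, a background function bound to a Cloud Storage bucket might look like this sketch; the event payload shape follows the beta Node.js documentation, and all names are illustrative:

// index.js -- a background function triggered by object changes in a bucket.
exports.onFileChange = function (event, callback) {
  var file = event.data;  // metadata for the changed object
  console.log('File ' + file.name + ' in bucket ' + file.bucket + ' changed.');
  callback();             // signal completion so the function can shut down
};

gcloud beta functions deploy onFileChange --stage-bucket my-staging-bucket \
    --trigger-bucket my-data-bucket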

Cloud Functions is also a great option for mobile developers using Firebase, allowing them to build backends integrated with the Firebase platform. Cloud Functions for Firebase handles events emitted from the Firebase Realtime Database, Firebase Authentication and Firebase Analytics.

Growing the Google BigQuery universe: introducing BigQuery Data Transfer Service

Since our earliest days, our customers have turned to Google to promote their advertising messages around the world, at a scale that was previously unimaginable. Today, those same customers want to use BigQuery, our powerful data analytics service, to better understand how users interact with those campaigns. To that end, we’ve developed deeper integration between the broader Google and GCP with the public beta of the BigQuery Data Transfer Service, which automates data movement from select Google applications directly into BigQuery. With BigQuery Data Transfer Service, marketing and business analysts can easily export data from AdWords, DoubleClick and YouTube directly into BigQuery, making it available for immediate analysis and visualization using the extensive set of tools in the BigQuery ecosystem.

Slashing data preparation time with Google Cloud Dataprep

In fact, our goal is to make it easy to import data into BigQuery, while keeping it secure. Google Cloud Dataprep is a new serverless browser-based service that can dramatically cut the time it takes to prepare data for analysis, which represents about 80% of the work that data scientists do. It intelligently connects to your data source, identifies data types, identifies anomalies and suggests data transformations. Data scientists can then visualize their data schemas until they're happy with the proposed data transformation. Dataprep then creates a data pipeline in Google Cloud Dataflow, cleans the data and exports it to BigQuery or other destinations. In other words, you can now prepare structured and unstructured data for analysis with clicks, not code. For more information on Dataprep, apply to be part of the private beta. Also, you’ll find more news about our latest database and data and analytics capabilities here and here.

Hello, (more) world

Not only are we working hard on bringing you new products and capabilities, but we want your users to access them quickly and securely — wherever they may be. That’s why we’re announcing three new Google Cloud Platform regions: California, Montreal and the Netherlands. These will bring the total number of Google Cloud regions up from six today to more than 17 locations in the future. These new regions will deliver lower latency for customers in adjacent geographic areas, increased scalability and more disaster recovery options. Like other Google Cloud regions, the new regions will feature a minimum of three zones, benefit from Google’s global, private fibre network and offer a complement of GCP services.

Supercharging our infrastructure . . .

Customers run demanding workloads on GCP, and we're constantly striving to improve the performance of our VMs. For instance, we were honored to be the first public cloud provider to run Intel Skylake, a custom Xeon chip that delivers significant enhancements for compute-heavy workloads and a larger range of VM memory and CPU options.

We’re also doubling the number of vCPUs you can run in an instance from 32 to 64 and now offering up to 416GB of memory, which customers have asked us for as they move large enterprise applications to Google Cloud. Meanwhile, we recently began offering GPUs, which provide substantial performance improvements to parallel workloads like training machine learning models.

To continually unlock new energy sources, Schlumberger collects large quantities of data to build detailed subsurface earth models based on acoustic measurements, and GCP compute infrastructure has the unique characteristics that match Schlumberger's needs to turn this data into insights. High performance scientific computing is integral to its business, so GCP's flexibility is critical.

Schlumberger can mix and match GPUs and CPUs and dynamically create different shapes and types of virtual machines, choosing memory and storage options on demand.

"We are now leveraging the strengths offered by cloud computation stacks to bring our data processing to the next level. Ashok Belani, Executive Vice President Technology, Schlumberger

. . . without supercharging our prices

We aim to keep costs low. Today we announced Committed Use Discounts that provide up to 57% off the list price on Google Compute Engine, in exchange for a one or three year purchase commitment. Committed Use Discounts are based on the total amount of CPU and RAM you purchase, and give you the flexibility to use different instance and machine types; they apply automatically, even if you change instance types (or size). There are no upfront costs with Committed Use Discounts, and they are billed monthly. What’s more, we automatically apply Sustained Use Discounts to any additional usage above a commitment.

We're also dropping prices for Compute Engine. The specific cuts vary by region. Customers in the United States will see a 5% price drop; customers in Europe will see a 4.9% drop and customers using our Tokyo region an 8% drop.

Then there’s our improved Free Tier. First, we’ve extended the free trial from 60 days to 12 months, allowing you to use your $300 credit across all GCP services and APIs, at your own pace and on your own schedule. Second, we’re introducing new Always Free products — non-expiring usage limits that you can use to test and develop applications at no cost. New additions include Compute Engine, Cloud Pub/Sub, Google Cloud Storage and Cloud Functions, bringing the number of Always Free products up to 15, and broadening the horizons for developers getting started on GCP. Visit the Google Cloud Platform Free Tier page today for further details, terms, eligibility and to sign up.

We'll be diving into all of these product announcements in much more detail in the coming days, so stay tuned!

Google Cloud Platform bolsters support for relational databases



San Francisco — Today, we announced new offerings in GCP’s database-services portfolio to give customers even more freedom to focus on building great apps for more use cases, rather than on management details.

In the early days of cloud computing, developers were constrained by the relatively limited choice of database services for production use cases, whether they were replacing on-premise apps or building new ones.

Those constraints have now virtually disappeared. With the announcement of Google Cloud Spanner last month, Google Cloud can meet the most stringent customer requirements for consistency, availability, and scalability in transactional database applications.

Cloud Spanner joins Google Cloud Datastore, Google Cloud Bigtable and Google Cloud SQL to deliver a complete set of databases on which developers can build great applications across a spectrum of use cases without being part-time DBAs. Furthermore, many third-parties have joined the Cloud Spanner ecosystem: Xplenty now supports data transfer to Cloud Spanner, iCharts, Looker, MicroStrategy and Zoomdata provide visual data analytics, and more partners are on their way.

Today, at Google Cloud NEXT ‘17, we're pleased to continue this story with the following announcements.

Cloud SQL for PostgreSQL (Beta)


With the beta availability of Cloud SQL for PostgreSQL in the coming week, it will be easier to securely connect to a database from just about any application, anywhere.

Cloud SQL for PostgreSQL implements the same design principles currently reflected in Cloud SQL for MySQL: namely, the ability to securely store and connect to your relational data via open standards. It also includes all the familiar advantages of a Google Cloud service — in particular, the ability to focus on application development, rather than on tedious infrastructure-management operations.

Here’s how Descartes Labs, which uses machine learning to analyze and predict changes in US food supply based on satellite imagery, is already getting value from Cloud SQL for PostgreSQL:
“Cloud SQL gives us more time to work on products that provide value to our customers. Our individual teams, who are building micro services, can quickly provision a database on Cloud SQL. They don't need to bother compiling Geos, Proj4, GDAL and Lib2xml to leverage PostGIS. And when PostGIS isn’t needed, our teams use PostgreSQL without extensions or MySQL, also supported by Cloud SQL.” — Tim Kelton, Co-founder and Cloud Architect, Descartes Labs

Getting started with Cloud SQL is easier than ever thanks to a growing list of partners. Partners already supporting Cloud SQL for PostgreSQL include Alooma, Informatica, Segment and Xplenty for data integration, and ChartIO, iCharts, Looker, Metabase and Zoomdata for visual analytics.

Thanks to your feedback, Cloud SQL for PostgreSQL will continue to improve during the beta period; we look forward to hearing about your experiences!

Improved support for MySQL and SQL Server Enterprise 


We have news about other relational-database offerings, as well:
  • Cloud SQL for MySQL improvements: Increased performance for demanding workloads via 32-core instances with up to 208GB of RAM, and central management of resources via Identity and Access Management (IAM) controls
  • Enhanced Microsoft SQL Server support: We announced availability for SQL Server Enterprise images in beta earlier this year; today, we're announcing that SQL Server Enterprise images on Google Compute Engine, and support for Windows Server Failover Clustering (WSFC) and SQL Server AlwaysOn Availability Groups, are now both in GA.

Improved SSD Persistent Disk performance

SSD persistent disks now have increased throughput and IOPS performance, which is particularly beneficial for database and analytics workloads:
  • Instances with 32 vCPUs: up to 40k read IOPS and 30k write IOPS, with 800 MB/s of read throughput and 400 MB/s of write throughput
  • Instances with 16-31 vCPUs: up to 25k read or write IOPS, with 480 MB/s of read throughput and 240 MB/s of write throughput
Refer to these docs for complete details about Persistent Disk performance limits.


Federated query on Cloud Bigtable


Finally, we're extending BigQuery's reach to query data inside Google Cloud Bigtable, the NoSQL database service designed for massive analytic or operational workloads that require low latency and high throughput (particularly common in Financial Services and IoT use cases). BigQuery users can already query data in Google Cloud Storage, Google Drive and Google Sheets; the ability to query data in Cloud Bigtable is the next step toward a seamless cloud platform in which data of all kinds can be analyzed conveniently via BigQuery, without the need to copy it across systems.

Next steps


With these announcements, developers now have more choices for moving workloads to the cloud than ever before, and greater freedom to focus on building the best possible apps. We urge you to sign up for a $300 credit to try Cloud SQL and the rest of GCP. Start with inexpensive micro instances for testing and development; when you’re ready, you can easily scale them to serve performance-intensive applications.