
Five can’t-miss application development sessions at Google Cloud Next ‘18

Google Cloud Next ‘18 will be a developer’s paradise, with bootcamps, hands-on labs, and yes, breakout sessions—more than 60 dedicated to app dev in some form or another. And that’s before we get to the Spotlight sessions explaining new product launches! We polled developer advocates and product managers from across Google Cloud, and here are their picks for the sessions you can’t afford to miss.

1. From Zero to Production: Build a Production-Ready Deployment Pipeline for Your Next App

Scott Feinberg, Customer Engineer, Google Cloud

Want to start deploying to Google Cloud Platform (GCP) but aren't sure how to start? In this session, you'll take an app with multiple process types, containerize it, and build a deployment pipeline with Container Builder to test and deploy your code to a Kubernetes Engine cluster.

Register for the session here.

2. Enterprise-Grade Mobile Apps with Firebase

Michael McDonald, Product Manager and Jonathan Shriver-Blake, Product Manager, Google Firebase

Firebase helps mobile development teams build better apps, improve app quality, and grow their business. But before you can use it in your enterprise, you’ll have to answer a number of questions: Will it scale in production? Is it reliable, and can your team monitor it? How do you control who has access to production data? What will the lawyers say? And how about compliance and GDPR? This session will show you the answers to these questions and pave the way to use Firebase in your enterprise.

Click here to reserve your spot.

3. Migrating to Cloud Spanner

Niel Markwick, Solutions Architect and Sami Zuhuruddin, Staff Solutions Architect, Google Cloud

When migrating an existing database to Cloud Spanner, an essential step is importing the existing data. This session describes the steps required to migrate the data and any pitfalls that need to be dealt with during the process. We'll cover what it looks like to transition to Cloud Spanner, including schema migration, data movement, cutover, and application changes. To make it real, we'll be looking at migrating from two popular systems: one NoSQL and the other SQL.

Find more details about the session here.

4. Serverless Compute on Google Cloud: What's New

Myles Borins, Developer Advocate and Jason Polites, Product Manager, Google

Join us to learn what’s new in serverless compute on GCP. We will share the latest developments in App Engine and Cloud Functions and show you how you can benefit from new feature releases. You will also get a sneak peek at what’s coming next.

Secure your spot today.

5. Accelerating Your Kubernetes Development with Kubernetes Applications

Konrad Delong, Senior Software Engineer; David Eustis, Senior Staff Software Engineer; and Kenneth Owens, Software Engineer, Google

Kubernetes applications provide a new, powerful abstraction for you to compose and re-use application building blocks from a variety of sources. In this talk, we’ll show you how to accelerate your development process by taking advantage of Kubernetes applications. We’ll walk you through creating these applications and deploying third-party, commercial Kubernetes applications from the Google Cloud Marketplace.

Click here to register for this session.

And if you haven’t already registered for Next, don’t delay! Everyone who attends will receive $500 in GCP credits. Imagine the possibilities!

Why we believe in an open cloud



Open clouds matter more now than ever. While most companies today use a single public cloud provider in addition to their on-premises environment, research shows that most companies will likely adopt multiple public and private clouds in the coming years. In fact, according to a 2018 RightScale study, 81 percent of enterprises with 1,000 or more employees have a multi-cloud strategy, and if you consider SaaS, most organizations are doing multi-cloud already.

Open clouds let customers freely choose which combination of services and providers will best meet their needs over time. Open clouds let customers orchestrate their infrastructure effectively across hybrid-cloud environments.

We believe in three principles for an open cloud:
  1. Open is about the power to pick up an app and move it—to and from on-premises, our cloud, or another cloud—at any time.
  2. Open-source software permits a richness of thought and continuous feedback loop with users.
  3. Open APIs preserve everyone’s ability to build on each other’s work.

1. Open is about the power to pick up an app and move it

An open cloud is grounded in a belief that being tied to a particular cloud shouldn’t get in the way of achieving your goals. An open cloud embraces the idea that the power to deliver your apps to different clouds while using a common development and operations approach will help you meet whatever your priority is at any given time—whether that’s making the most of skills shared widely across your teams or rapidly accelerating innovation. Open source is an enabler of open clouds because open source in the cloud preserves your control over where you deploy your IT investments. For example, customers are using Kubernetes to manage containers and TensorFlow to build machine learning models on-premises and on multiple clouds.

2. Open-source software permits a richness of thought and continuous feedback loop with users

Through a continuous feedback loop with users, open source software (OSS) results in better software, faster. It also requires substantial time and investment from the people and companies leading open source projects. Here are examples of Google’s commitment to OSS and the varying levels of work required:
  • OSS such as Android has an open code base, but development is the sole responsibility of one organization.
  • OSS with community-driven changes, such as TensorFlow, involves coordination between many companies and individuals.
  • OSS with community-driven strategy, such as our collaboration with the Linux Foundation and the Kubernetes community, involves collaborative decision-making and accepting consensus over control.
Open source is so important to Google that we call it out twice in our corporate philosophies, and we encourage employees, and in fact all developers, to engage with open source.

Using BigQuery to analyze GHarchive.org data, we found that in 2017, over 5,500 Googlers submitted code to nearly 26,000 repositories, created over 215,000 pull requests, and engaged with countless communities through almost 450,000 comments. A comparative analysis of Google’s open source contributions, based on normalized data, provides a useful view of where the leading companies stand in open source.

Googlers are active contributors to popular projects you may have heard of including Linux, LLVM, Samba, and Git.

Google regularly open-sources internal projects

Top Google-initiated projects include Kubernetes and TensorFlow, both mentioned above, among many others.

3. Open APIs preserve everyone’s ability to build on each other’s work

Open APIs preserve everyone’s ability to build on each other’s work, improving software iteratively and collaboratively. Open APIs empower companies and individual developers to change service providers at will. Peer-reviewed research shows that open APIs drive faster innovation across the industry and in any given ecosystem. Open APIs depend on the right to reuse established APIs by creating independent-yet-compatible implementations. Google is committed to supporting open APIs through our membership in the Open API Initiative and involvement in the OpenAPI specification, our support of gRPC, Cloud Bigtable’s compatibility with the HBase API, Cloud Spanner’s and BigQuery’s compatibility with SQL:2011 (with extensions), and Cloud Storage’s compatibility with shared APIs.

Build an open cloud with us

If you believe in an open cloud like we do, we’d love your participation. You can help by contributing to and using open source libraries, and asking your infrastructure and cloud vendors what they’re doing to keep workloads free from lock-in. We believe open ecosystems grow the fastest and are more resilient and adaptable in the face of change. Like you, we’re in it for the long-term.



It’s worth noting that not all of Google’s products will be open in every way at every stage of their life cycle. Openness is less of an absolute and more of a mindset when conducting business in general. You can, however, expect Google Cloud to continue investing in openness across our products over time, to contribute to open source projects, and to open source some of our internal projects.

If you believe open clouds are an important part of making this multi-cloud world a place in which everyone can thrive, we encourage you to check out our new open cloud website where we offer more detailed definitions and examples of the terms, concepts, and ideas we’ve discussed here: cloud.google.com/opencloud.

Google Cloud for Electronic Design Automation: new partners



A popular enterprise use case for Google Cloud is electronic design automation (EDA)—designing electronic systems such as integrated circuits and printed circuit boards. EDA workloads, like simulations and field solvers, can be incredibly computationally intensive. They may require a few thousand CPUs, sometimes even a few hundred thousand CPUs, but only for the duration of the run. Instead of building up massive server farms that are oversubscribed during peak times and sit idle for the rest of the time, you can use Google Cloud Platform (GCP) compute and storage resources to implement large-scale modeling and simulation grids.
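As a rough sketch of what such burst capacity can look like from the command line (machine type, zone, names, and grid size here are purely illustrative, not a recommendation), you can create a pool of preemptible worker VMs with a managed instance group for the duration of a run, then tear it down afterwards:

# Reusable template for preemptible simulation workers (names and sizes are illustrative)
gcloud compute instance-templates create sim-worker \
    --machine-type n1-highcpu-32 --preemptible

# Spin up a grid of workers for the duration of the run
gcloud compute instance-groups managed create sim-grid \
    --zone us-central1-b --template sim-worker --size 1000

# Delete the grid once the simulation completes, so you pay nothing while idle
gcloud compute instance-groups managed delete sim-grid --zone us-central1-b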

Our partnerships with software and service providers make Google Cloud an even stronger platform for EDA. These solutions deliver elastic infrastructure and improved time-to-market for customers like eSilicon, as described here.

Scalable simulation capacity on GCP provided by Metrics Technologies (more details below)

This week at the Design Automation Conference (DAC), we’re showcasing a first-of-its-kind implementation of EDA in the cloud: the Google Hardware Engineering team’s implementation of the Synopsys VCS simulation solution for internal EDA workloads on Google Cloud. We also have several new partnerships to help you achieve operational and engineering excellence through cloud computing, including:

  • Metrics Technologies is the first EDA platform provider of cloud-based SystemVerilog simulation and verification management, accelerating the move of semiconductor verification workloads into the cloud. The Metrics Cloud Simulator and Verification Manager, a pay-by-the-minute software-as-a-service (SaaS) solution built entirely on GCP, improves resource utilization and engineering productivity, and can scale capacity with variable demand. Simulation resources are dynamically adjusted up or down by the minute without the need to purchase additional hardware or licenses, or manage disk space. You can find Metrics news and reviews at www.metrics.ca/news, or schedule a demo at DAC 2018 at www.metrics.ca.
  • Elastifile delivers enterprise-grade, scalable file storage on Google Cloud. Powered by a high-performance, POSIX-compliant distributed file system with integrated object tiering, Elastifile simplifies storage and data management for EDA workflows. Deployable in minutes via Google Cloud Launcher, Elastifile enables cloud-accelerated circuit design and verification, with no changes required to existing tools and scripts.
  • NetApp is a leading provider of high-performance storage solutions. NetApp is launching Cloud Volumes for Google Cloud Platform, which is currently available in Private Preview. With NetApp Cloud Volumes, GCP customers have access to a fully-managed, familiar file storage (NFS) service with a cloud native experience.
  • Quobyte provides a parallel, distributed, POSIX-compatible file system that runs on GCP and on-premises to provide petabytes of storage and millions of IOPS. As a distributed file system, Quobyte scales IOPS and throughput linearly with the number of nodes, avoiding the performance bottlenecks of clustered or single filer solutions. You can try Quobyte today on the Cloud Launcher Marketplace.
If you’d like to learn more about EDA offerings on Google Cloud, we encourage you to visit us at booth 1251 at DAC 2018. And if you’re interested in learning more about how our Hardware Engineering team used Synopsys VCS on Google Cloud for internal Google workloads, please stop by Design Infrastructure Alley on Tuesday for a talk by team members Richard Ho and Ravi Rajamani. Hope to see you there!

How to connect Stackdriver to external monitoring



Google Stackdriver lets you track your cloud-powered applications with monitoring, logging and diagnostics. Using Stackdriver to monitor Google Cloud Platform (GCP) or Amazon Web Services (AWS) projects has many advantages—you get detailed performance data and can set up tailored alerts. However, we know from our customers that many businesses are bridging cloud and on-premises environments. In these hybrid situations, it’s often necessary to also connect Stackdriver to an on-prem monitoring system. This is especially important if there is already a monitoring process in place that involves classic IT Business Management (ITBM) tasks, like opening and closing tickets and incidents automatically.

Luckily, you can use Stackdriver in these circumstances by enabling alerting policies via webhooks. We’ll explain how in this blog post, using the example of monitoring the uptime of a web server. Setting up the monitoring condition and alerting policy is really where Stackdriver shines, since it auto-detects GCP instances and can analyze log files; the details differ depending on the customer environment. (You can also find more here about alerting and incident management in Stackdriver.)

Get started with server and firewall policies to external monitoring

To keep it simple, we’ll start with explaining how to do an HTTP check on a freshly installed web server (nginx). This is called an uptime check in Stackdriver.

First, let’s set up the server and firewall policy. In order for the check to succeed, make sure you’ve created a firewall rule in the GCP console that allows HTTP traffic to the public IP of the web server. The best way to do that is to create a tag-based firewall rule that allows all IP addresses (0.0.0.0/0) on the tag “http.” You can then add that tag to your newly created web server instance. (We created ours by spinning up a micro instance from an Ubuntu image, then installing nginx with apt-get.)
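If you prefer doing this from the command line, a sketch of the equivalent gcloud commands might look like this (the instance name is illustrative; the rule name matches the “allow-http” rule we delete later when testing the alert):

gcloud compute firewall-rules create allow-http \
    --network default --allow tcp:80 \
    --source-ranges 0.0.0.0/0 --target-tags http

gcloud compute instances add-tags my-web-server \
    --zone us-central1-a --tags http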

If you prefer containers, you can use Kubernetes to spin up an nginx container.
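For example, here’s a minimal sketch for an existing cluster (the deployment name is illustrative; on clusters of this era, kubectl run creates a Deployment):

kubectl run nginx --image=nginx --port=80
kubectl expose deployment nginx --type=LoadBalancer --port=80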

Make sure to check the firewall rule by manually entering the web server’s public IP in a browser. If everything is configured correctly, you should see the nginx greeting page:

Setting up the uptime check

Now let’s set up the website uptime check. Open the Stackdriver monitoring menu in your GCP cloud console.

In this case, we created a little web server instance with a public IP address. We want to monitor this public IP address to check the web server’s uptime. To set this up, select “Uptime Checks” from the right-side menu of the Stackdriver monitoring page.

Remember: This is a test case, so we set the check interval to one minute. For real-world use cases, this value might change according to the service monitoring requirements.

Once you have set up the Uptime Check, you can now go ahead and set up an alerting policy. Click on “Create New Policy” in the following popup window (only appears the first time you create an Uptime Check). Or you can click on “Alerting” on the left-side Stackdriver menu to set it up. Click on “Create a Policy” in the popup menu.

Setting up the alert policy

Once you click on “Create a Policy,” you should see a new popup with four steps to complete.

The first step will ask for a condition “when” to trigger the alert. This is where you have to make sure the Uptime Check is added. To do this, simply click on the “Add Condition” button.

A new window will appear from the right side:

Specify the Uptime Check by clicking on Select under “Basic Health.”

This will bring up this window (also from the right side) to select the specific Uptime Check to alert on. Simply choose “URL” in the “Resource Type” field and the “IF UPTIME CHECK” section will appear automatically. Here, we select the previously created Uptime Check.


You can also set the duration of the service downtime to trigger an alert. In this case, we used the default of five minutes. Click “Save Condition” to continue with the Alert Policy setup.

This leads us to step two:

This is where things get interesting. In order to include an external monitoring system, you can use so-called webhooks. These are typically callouts that use an HTTP POST to send JSON-formatted messages to the external system. The on-prem or third-party monitoring system needs to understand this format in order to process the alerts properly. Typically, there’s wide support in the monitoring industry for receiving and using webhooks.

Setting up the alerts

Now you’ll set up the alerts. In this example, we’re configuring a webhook only. You can set up multiple ways to get alerted simultaneously. If you want to get an email and a webhook at the same time, just configure it that way by adding the second (or third) method. In this example, we’ll use a free webhook receiver to monitor if our setup works properly.

Once the site has generated a webhook receiver for you, you’ll have a link you can use that will list all received tokens for you. Remember, this is for testing purposes only. Do not send in any user-specific data such as private IP addresses or service names.

Next you have to configure the notification to use a webhook so it’ll send a message over to our shiny new webhook receiver. Click on “Add Notification.”

By default a field will appear saying “Email”—click on the drop-down arrow to see the other options:

Select “Webhook” in the drop-down menu.

The system will most likely tell you that there is no webhook setup present. That’s because you haven’t specified any webhook receiver yet. Click on “Setup Webhook.”

(If you’ve already set up a webhook receiver, the system won’t offer you this option here.)

To set one up, go to the “select project” drop-down list (top left, next to the Stackdriver logo in the gray bar area). Click on the down arrow symbol (next to your project ID) and select “Account Settings” at the bottom of the drop-down box.

In the popup window, select “Notifications” (bottom of the left-side list under “Settings”) and then click on “Webhooks” at the top menu. Here you can add additional webhooks if needed.

Click on “Create webhook.”

Remember to put in your webhook endpoint URL. In our test case, we do not need any authentication.

Click on “Test Connection” to verify and see your first webhook appearing on the test site!

It should say “This is a test alert notification from Stackdriver.”

Now let’s continue with the Alerting Policy. Choose the newly created webhook by selecting “Webhook” as notification type and the webhook name (created earlier) as the target. If you want to have additional notification settings (like SMS, email, etc.), feel free to add those as well by clicking on “Add another notification.”

Once you add a notification, you can optionally add documentation by creating a so-called “Markdown document.” Learn more here about the Markdown language.

Last but not least, give the Alert Policy a descriptive name:

We decided to go super creative and call it “HTTP - uptime alert.” Once you have done this, click “Save Policy” at the bottom of the page.

Done! You just created your first policy, including a webhook to trigger alerts on incidents.

The policy should be green and the uptime check should report your service being healthy. If not, check your firewall rules.

Test your alerting

If everything is normal and works as expected, it’s time to test your alerting policy. To do that, simply delete the “allow-http” firewall rule created earlier. This should result in a “service unavailable” condition for our Uptime Check. Remember to give it a little while: the Uptime Check waits 10 seconds per region, and about a minute overall, before it declares the service down (remember, we configured a one-minute check interval earlier).
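If you created the rule with gcloud earlier, deleting it is a one-liner:

gcloud compute firewall-rules delete allow-http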

Now you’ll see that you can’t reach the nginx web server instance anymore:

Now let’s go to the Stackdriver overview page to see if we can find the incident. Click on “Monitoring Overview” in the left-side menu at the very top:

Indeed, the Uptime Check comes back red, telling us the service is down. Also, our Alerting Policy has created an incident saying that the “HTTP - uptime alert” has been triggered and the service has been unavailable for a couple of minutes now.

Let’s check the test receiver site to see if we got the webhook to trigger there:

You can see we got the webhook alert with the same information regarding the incident. This information is passed on using the JSON format for easy parsing at the receiving end. You will see the policy name that was triggered (first red rectangle), the state “open,” as well as the “started at” timestamp in Unix time format (seconds passed since 1970). Also, it will tell you that the service is failing in the “summary” field. If you had configured any optional documentation, you’d see it using the JSON format (HTTP post).
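To give you an idea of the shape of that message, here is a simulated delivery using curl. The payload below is illustrative only (field names and values may vary slightly between Stackdriver versions), and the receiver URL is a placeholder for the test endpoint you generated earlier:

curl -X POST -H "Content-Type: application/json" \
    -d '{"version": "1.1", "incident": {"policy_name": "HTTP - uptime alert", "state": "open", "started_at": 1529597279, "summary": "HTTP uptime check is failing."}}' \
    https://your-webhook-receiver.example.com/your-token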

Bring the service back

Now, recreate the firewall rule to see if we get an “incident resolved” message.
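From the command line, that’s just the original create command again:

gcloud compute firewall-rules create allow-http \
    --network default --allow tcp:80 \
    --source-ranges 0.0.0.0/0 --target-tags http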

Let’s check the overview screen again (remember to give it five or six minutes after recreating the rule for the check to react).

You can see that service is back up. Stackdriver automatically resolves open incidents once the triggering condition clears. So in our case, the formerly open incident is now resolved, since the Uptime Check comes back as “healthy” again. This information is also passed on via the alerting policy. Let’s see if we got a “condition restored” webhook message as well.

By the power of webhooks, it also told our test monitoring system that this incident is closed now, including useful details such as the ending time (Unix timestamp format) and a summary telling us that the service has returned to a normal state.

If you need to connect Stackdriver to a third-party monitoring system, webhooks are an extremely flexible way of doing so. They let your operations team continue using their familiar go-to resources on-premises, while using all the advantages of Stackdriver in a GCP (or AWS) environment. Furthermore, existing monitoring processes can be reused to bridge into the Google Cloud world.

Remember that Stackdriver can do far more than uptime checks, including log monitoring, source code debugging, and tracing user interactions with your application. Whether it’s alerting policy functionality, webhook messaging or other checks you define in Stackdriver, all can be forwarded to a third-party monitoring tool. Even better, you can close incidents automatically once they have been resolved.

Have fun monitoring your cloud services!

Related content:

New ways to manage and automate your Stackdriver alerting policies
How to export logs from Stackdriver Logging: new solution documentation
Monitor your GCP environment with Cloud Security Command Center

Announcing a new certification from Google Cloud Certified: the Associate Cloud Engineer



Cloud is no longer an emerging technology. Now that businesses large and small are realizing the potential of cloud services, the need to hire individuals who can manage cloud workloads has sky-rocketed. Today, we’re launching a new Associate Cloud Engineer certification, designed to address the growing demand for individuals with the foundational cloud skills necessary to deploy applications and maintain cloud projects on Google Cloud Platform (GCP).

The Associate Cloud Engineer certification joins Professional Cloud Architect, which launched in 2016, and Data Engineer, which followed quickly thereafter. These certifications identify individuals with the skills and experience to leverage GCP to overcome complex business challenges. Since the program’s inception, Google Cloud Certified has experienced continual growth, especially this last year when the number of people sitting for our professional certifications grew by 10x.

Because cloud technology affects so many aspects of an organization, IT professionals need to know when and how to use cloud tools in a variety of scenarios, ranging from data analytics to scalability. For example, it's not enough to launch an application in the cloud. Associate Cloud Engineers also ensure that the application grows seamlessly, is properly monitored, and can be readily managed by authorized personnel.

Feedback from the beta launch of the Associate Cloud Engineer certification has been great. Morgan Jones, an IT professional, was eager to participate because he sees “the future of succeeding and delivering business value from the cloud is to adopt a multi-cloud strategy. This certification can really help me succeed in the GCP environment."

As an entry point to our professional-level certifications, the Associate Cloud Engineer demonstrates solid working knowledge of GCP products and technologies. “You have to have experience on the GCP Console to do well on this exam. If you haven’t used the platform and you just cram for the exam, you will not do well. The hands-on labs helped me prepare for that,” says Jones.

Partners were a major impetus in the development of the Associate Cloud Engineer exam, which will help them expand GCP knowledge throughout their organizations and address increasing demand for Google Cloud technologies head-on. Their enthusiastic response to news of this exam sends signals that the Associate Cloud Engineer will be a catalyst for an array of opportunities for those early in their cloud career.

"We are really excited for the Associate Cloud Engineer to come to market. It allows us to target multiple role profiles within our company to drive greater knowledge and expertise of Google Cloud technologies across our various managed services offerings."
- Luvlynn McAllister, Rackspace, Director, Sales Strategy & Business Operations

The Associate Cloud Engineer exam is:
  • Two hours long
  • Recommended for IT professionals with six months of GCP experience
  • Available for a registration fee of $125 USD
  • Currently available in English
  • Available at Next ‘18 for registered attendees

The Google Cloud training team offers numerous ways to increase your Google Cloud know-how. Join our webinar on July 10 at 10:30am to hear from members of the team who developed the exam about how this certification differs from others in our program and how to best prepare. If you still want to check your readiness, take the online practice exam at no charge. For more information on suggested training and an exam guide, visit our website. Register for the exam today.

How to run SAP Fiori Front-End Server (OpenUI5) on GCP in 20 mins



Who enjoys doing risky development on their SAP system? No one. But if you need to build enterprise apps that use your SAP backend, not doing development is a non-starter. One solution is to apply Gartner’s Bimodal IT, the practice of managing two separate but coherent styles of work: one focused on predictability; the other on exploration. This is an awesome strategy for creating frontend innovation with modern HTML5/JS applications that are loosely coupled to the backend core ERP system, reducing risk. And it turns out that Google Cloud Platform (GCP) can be a great way to do Bimodal IT in a highly cost-effective way.

This blog walks through setting up SAP OpenUI5 on a GCP instance running a local node.js webserver to run sample apps. These apps can be the building blocks to develop new enterprise apps in the cloud without impacting your SAP backend. Let’s take a deeper look.

Set up your GCP account:

Make sure that you have set up your GCP free trial ($300 credit):
https://cloud.google.com/free/

After signing up, you can access GCP at
https://console.cloud.google.com

Everything in GCP happens in a project so we need to create one and enable billing (this uses your $300 free credit).

From the GCP Console, select or create a project by clicking GO TO THE MANAGE RESOURCES PAGE.

Make sure that billing is enabled (using your $300 free credit):

Setting up SAP OpenUI5 in GCP


1. Create a compute instance (virtual machine):


In the top left corner click on ‘Products and Services’:
Select ‘Compute Engine → VM instances’
  • Click ‘Create instance’
  • Give it the coolest name you can think of
  • Select the zone closest to where you are located
  • Under ‘Machine Type’, choose “micro (1 shared CPU)”. Watch the cost per month drop like a stone!
  • Under ‘Firewall’, check ‘Allow HTTP traffic’

Keep everything else as default and click Create. Your Debian VM should start in about 5-10 seconds.


2. Set up OpenUI5 on the new image:

SAP has an open-source version of its SAPUI5 framework, the basis for its Fiori Front-End Server, called OpenUI5. OpenUI5 comes with a number of sample apps. Let’s deploy these to a local node.js webserver on the instance.
Install nodejs and npm (node package manager):
sudo apt-get update
curl -sL https://deb.nodesource.com/setup_10.x | sudo -E bash -
sudo apt-get install -y nodejs

SAP files are zipped so install unzip with:
sudo apt-get install unzip

Make a project directory and change to it (feel free to change the name):
mkdir saptest 
cd saptest
Download the latest Stable OpenUI5 SDK from:
https://openui5.org/download.html
e.g.,
wget https://openui5.hana.ondemand.com/downloads/openui5-sdk-1.54.6.zip
Time to grab a coffee as the download may take about 5 to 10 minutes depending on your connection speed.
Extract the zip file to your project directory with:
unzip openui5-sdk-1.54.6.zip
Next we will set up a local static node.js http server to serve up requests running on port 8888. Download static_server.js and package.json from Github into your project folder:
curl -O https://raw.githubusercontent.com/htammen/static_server/master/static_server.js
curl -O https://raw.githubusercontent.com/htammen/static_server/master/package.json
(https://github.com/htammen/static_server)
Identify your primary working directory and create a symbolic link to your resources folder. This allows the demo apps to work out of the box without modification (adjust the path to match your own):
pwd
ln -s /home/<me>/saptest/resources resources 
Call the node package manager to install the http server:
npm install
Run the node.js static server to accept http requests:
node static_server.js
Your node server should now be running and able to serve up SAP OpenUI5 sample applications from localhost. However, we want to test this from outside the VM (e.g., from a mobile device), so let’s set up a firewall rule to allow traffic to our new static server on port 8888.
In the GCP Console click on ‘Products and Services’ (top left)
Networking → VPC Networking → Firewall Rules.
Click New to create a new firewall rule and enter the following settings:


Name: allow-nodeserver
Network: default
Priority: 1000
Direction: Ingress
Action on match: Allow
Targets: All instances in the network
Source filter: IP ranges
Source IP ranges: 0.0.0.0/0
Protocols and ports: tcp:8888

Now, click ‘Create’.
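If you’d rather script this step, a gcloud sketch matching the settings above would be:

gcloud compute firewall-rules create allow-nodeserver \
    --network default --direction INGRESS \
    --allow tcp:8888 --source-ranges 0.0.0.0/0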
Go to Products and Services → Compute Engine → VM instances and copy the External IP. Open up a browser and navigate to:
http://<External IP>:8888/index.html 
Congratulations! You are now running the OpenUI5 front-end on your GCP instance.

3. Explore the OpenUI5 demo apps

You can take a look at the sample applications offered in OpenUI5 by clicking on ‘Demo Apps’, or you can navigate directly to the shopping cart application with:
http://<External IP>:8888/test-resources/sap/m/demokit/cart/webapp/index.html
(Pro-Tip: email this link to yourself and open on your mobile device to see the adaptable UI in action. Really cool.)
These demo apps are just connecting to local sample data in XML files. In the real world, OData is often used. OData is a great way of connecting your front-end systems to backend SAP systems, and it can be activated on your SAP Gateway. Please consult your SAP documentation when setting this up.
SAPUI5 has even more capabilities than OpenUI5 (e.g. charts and micro graphs). This is available either in your SAP Deployment or on the SAP Cloud Platform. In addition, you can also leverage this on top of GCP via Cloud Foundry. Learn more here.
Good luck in your coding adventures!

References and other links

This awesome blog was the baseline of this tutorial:
https://blogs.sap.com/2015/09/25/running-ui5-apps-on-local-nodejs-server/

Some other good links to check out:
https://openui5.org/getstarted.html
https://blogs.sap.com/2017/03/13/how-to-consume-an-odata-service-with-openui5-sapui5/
https://blogs.sap.com/2015/07/15/sapui5-vs-fiori/
https://blogs.sap.com/2015/05/11/s4-hana-delivers-the-netweaver-vision-of-a-bimodal-it/

Labelling and grouping your Google Cloud Platform resources



Do you run and administer applications on Google Cloud Platform (GCP)? Do you need to group or classify your GCP resources to satisfy compliance demands? Do you need to manage traffic going to and from a VM, monitor specific resources, or see those resources by billing account? If you answered yes to any of these questions, you’ll be glad to know that GCP provides multiple ways to annotate your resources, to make them easier to track: security marks, labels and network tags.

While each annotation has different functionality and scope, they are not mutually exclusive and you will often use a combination of them to meet your requirements. To help you choose which annotation, when, take a look at this flowchart.

Let’s take a deeper look at each of these types of annotations.

Annotation type: Security marks

Security marks, or marks, provide you a way to annotate assets and then search, select, or filter using the mark via Cloud Security Command Center (Cloud SCC).

Use cases:
Here are the main use cases for security marks:
  • classifying and organizing assets and findings independent of resource-level labelling mechanisms, including multi-parented groupings
  • enabling tracking of violation severity and priority
  • integrating with workflow systems for assignment and resolution of incidents
  • enabling differentiated policy enforcement on resources, projects or groups of projects
  • enhancing security focused insights into your resources, e.g., clarifying which publicly accessible buckets are within policy and which are not
Note that Cloud Labels and Tags for the associated supported resources also appear and are indexed by Cloud SCC, so that you can use automation logic to create/modify marks based on the values of existing resource labels and tags.

How to use:
Marks are key-value pairs that are supported by a number of resources. They provide a security-focused view of the supported resources and are only visible from Cloud SCC. Edit or view access to the inventory of resources in Cloud SCC and their associated marks requires the securityCenter.editor IAM role, independently of the roles and permissions on the underlying resource. You can set marks at the org level, project level or for individual resources that support marks. To work with marks, you can use cURL, the REST API, the Cloud SCC Python library, or the Cloud SCC asset inventory page, by selecting the resources you wish to apply a mark to and then adding the key-value pair items.

What you can annotate:
A valid mark meets the following criteria:
  • While in Alpha, keys must have a minimum length of 1 character and a maximum length of 63 characters, and cannot be empty. In the Beta and GA releases, keys will be extended to support up to 256 characters, and values can then have a maximum of 4096 characters.
  • Keys and values can contain only lowercase letters, numeric characters, underscores, and dashes. All characters must use UTF-8 encoding, and can include international characters.
You can find an up-to-date list of resources that can be annotated using marks here.

Annotation type: Labels

Labels are key-value pairs that are supported by a number of GCP resources. You can use labels to track your spend in exported billing data. You can also use labels to filter and group resources for other use cases, for example, to identify all those resources that are in a test environment, as opposed to those in production.

Here’s a list of all the things you can do with labels:
  • Identify resources used by individual teams or cost centers
  • Distinguish deployment environments (prod, stage, qa, test)
  • Identify owner and state labels
  • Use for cost allocation and billing breakdowns
  • Monitor resource groups via Stackdriver, which can use labels accessible in the resource metadata
How to use:
A valid label meets the following criteria:
  • Each label must be a key-value pair.
  • Keys must have a minimum length of 1 character and a maximum length of 63 characters, and cannot be empty. Values can be empty, and have a maximum length of 63 characters.
  • Keys and values can contain only lowercase letters, numeric characters, underscores, and dashes. All characters must use UTF-8 encoding, and can include international characters.
  • The key portion of a label must be unique. However, you can use the same key with multiple resources. Keys must start with a lowercase letter or international character.
Check the supported resources to learn how to apply labels and to what you can apply them. For instance, BigQuery lets you add labels to your datasets, tables, and views, while Cloud Storage allows you to add labels to buckets. You can add labels to projects but not to folders.
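For illustration, here’s what attaching a label looks like on a few different products from the command line (the resource names are placeholders we made up, not defaults):

# Label a Compute Engine instance
gcloud compute instances add-labels my-instance \
    --zone us-central1-a --labels=environment=test

# Label a BigQuery dataset
bq update --set_label environment:test my_dataset

# Label a Cloud Storage bucket
gsutil label ch -l environment:test gs://my-bucket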

The permissions you need to add labels to resources are determined on a product-by-product basis. For example, BigQuery requires the bigquery.datasets.update permission to modify labels on datasets. The owner of the dataset has this permission by default but you can also assign at the project level the predefined IAM roles bigquery.dataOwner and bigquery.admin, which include this permission. You can also add labels to tables and views; this action requires the bigquery.tables.update permission. Assigning the predefined IAM roles at project level bigquery.dataOwner, bigquery.dataEditor or bigquery.admin grants this permission. The owner of the dataset has full permissions over the tables and views that the dataset contains.

What you can label:
An up-to-date list of GCP products that support labels can be found here. Then, drill down into each product’s documentation for more details.

Note that you can label instances, but if you are annotating these to manage network traffic, you should use tags instead (see below).

Annotation type: Network tags

Network tags apply to instances and are the means for controlling network traffic to and from a VM instance. On GCP networks, tags identify which VM instances are subject to firewall rules and network routes. You can use the tags as source and destination values in firewall rules. For routes, tags are used to identify to which instances a certain route applies.

How to use:
Using tags means you can create additional isolation between subnetworks by selectively allowing only certain instances to communicate. If you arrange for all instances in a subnetwork to share the same tag, you can specify that tag in firewall rules to simulate a per-subnetwork firewall. For example if you have a subnet called ‘subnet-a’, you can tag all instances in subnet-a with the tag ‘my-subnet-a’, and use that tag in firewall rules as a source or destination.

Tags can be added or removed from an instance using gcloud commands, the Cloud Console or API calls. The following gcloud command will add the tags ‘production’ and ‘web’ to an instance:
gcloud compute instances add-tags [INSTANCE_NAME] --tags production,web
You can set firewall rules using gcloud commands and the Console. The following gcloud command sets a firewall rule using tags in the source and destination. It allows traffic from instances tagged ‘web-production’ to instances tagged ‘log-data’ via TCP port 443:
gcloud compute firewall-rules create web-logdata \
    --network logging-network \
    --allow TCP:443 \
    --source-tags web-production \
    --target-tags log-data
For routes, tags are used to identify which instances a certain route applies to. For example, you might create a route that applies to all VM instances that have been tagged with the string vpn. You can set routes using gcloud commands or the Console. The following gcloud command creates a route called my-route in a network called my-network that restricts the route to only apply to instances tagged ‘web-prod’ or ‘api-gate-prod’.

gcloud compute routes create my-route --destination-range 10.0.0.0/16 \
--network my-network --tags=web-prod,api-gate-prod
Network tags can, however, be modified by anyone in your org who has the Compute InstanceAdmin role in the project the network was created in. You can create a custom role with more restricted permissions that disables the ability to set tags on instances, by removing the compute.instances.setTag permission from the Compute InstanceAdmin role. Alternatively, instead of using tags and custom roles to prevent developers from adjusting tags (and thus enabling a firewall rule on their instances), use service accounts: unless developers have access to the appropriate centrally managed service accounts, they will be unable to modify the rule. Refer to service accounts vs tags to determine whether the restrictions when using service accounts for firewall rules are acceptable.
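As a sketch, the earlier web-logdata rule re-expressed with service accounts rather than tags might look like the following (the service account names are illustrative):

gcloud compute firewall-rules create web-logdata \
    --network logging-network \
    --allow TCP:443 \
    --source-service-accounts web-prod@my-project.iam.gserviceaccount.com \
    --target-service-accounts log-data@my-project.iam.gserviceaccount.com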

Tags values have to meet the following criteria:
  • Can be no longer than 63 characters each
  • Can only contain lowercase letters, numeric characters, and dashes
  • Must start and end with either a number or a lowercase character.

Marks, labels and network tags at a glance

To recap, here’s a table that summarizes common use cases and their associated annotations.

Use case: Taking inventory of GCP resources
Annotation(s): Security marks, Labels
Notes: Labels currently support a wider range of resources than security marks. The Cloud SCC view of your resources includes, as properties of the resource, the labels and tags you’ve applied; you can then further apply ACL’d security marks to control the organization and filtering of the superset of resources, tags and labels.

Use case: Classifying or grouping GCP resources and data into logical groups, such as dev or production environments, for non-ACL’d use cases
Annotation(s): Labels

Use case: Classifying or grouping GCP resources and data into logical groups, such as dev or production environments, for security use cases
Annotation(s): Security marks
Notes: Use these when you want control of the groups to sit at the organization level, or specifically not in the control of the resource owner.

Use case: Grouping and classifying sensitive resources, and/or organizing resources for attribution, in security use cases
Annotation(s): Security marks
Notes: Reading and setting marks is restricted to the Cloud SCC-specific roles.

Use case: Billing breakdown and cost allocation
Annotation(s): Labels

Use case: Network traffic management to and from instances
Annotation(s): Network tags
Notes: Network tags can be modified by anyone in your org who has the Compute InstanceAdmin role.

Use case: Monitoring groupings of related resources for operational tasks
Annotation(s): Labels
Notes: Used with Stackdriver resource groups.

Use case: Monitoring groupings of related resources and/or findings for security risk assessments, vulnerability management and threat detection
Annotation(s): Security marks
Notes: Used in Cloud SCC.

On your mark, get set, go

If you manage a big, complex environment, you know how hard it can be to keep track of all your GCP resources. We hope that security marks, labels and network tags can make that task a little bit easier. For more information on tracking resources in GCP, check out this how-to guide on creating and managing labels. And watch this space for more insider tips on managing GCP resources like a pro.

Doing DevOps in the cloud? Help us serve you better by taking this survey



The promise of higher service availability, reduced costs, and faster delivery is driving organizations to adopt public cloud services at a rapid pace. This includes organizations in highly regulated domains like financial services, healthcare, and government. However, the benefits promised by a well-designed cloud platform cannot be achieved with poor implementations that simply move traditional data center operations to the cloud, with no other changes in practices or systems architecture.

But we can’t improve what we don’t understand, and to this end we’re working with DevOps Research (DORA) on a project to help us all level up. To get better as an industry, we have to understand the use of cloud services and the impact of these practices on our ability to deliver high-quality software with speed and stability. Many of us have seen this first-hand. To get involved in the project, please take our survey.

I love working with developers and operators because I’ve seen how DevOps and Continuous Delivery practices work across multiple industries: how changing tooling can change culture and create an environment of respect, accountability, and truth-telling. In so many organizations, the migration to the cloud isn’t just about changing where a workload runs. It’s an attempt to change IT culture, and DevOps practices are at the center of that transformation.

And now, we’d like to hear from you: what’s important to you when it comes to implementing and managing cloud services? We’ve teamed with DORA to research the impact of DevOps practices on organizations. And we need your insights and expertise to help shape this work. (Also, by participating, you could win some great prizes from DevOps.)

In collaboration with DORA, this program will study development and delivery practices that help make cloud computing reliable, available, and secure. We’re asking for 20 minutes of your time so we can better understand which practices are most effective for delivering software on cloud infrastructure. Click here to take the survey.

Time to “Hello, World”: VMs vs. containers vs. PaaS vs. FaaS



Do you want to build applications on Google Cloud Platform (GCP) but have no idea where to start? That was me, just a few months ago, before I joined the Google Cloud compute team. To prepare for my interview, I watched a bunch of GCP Next 2017 talks, to get up to speed with application development on GCP.

And since there is no better way to learn than by doing, I also decided to build a “Hello, World” web application on each of GCP’s compute offerings—Google Compute Engine (VMs), Google Kubernetes Engine (containers), Google App Engine (PaaS), and Google Cloud Functions (FaaS). To make this exercise more fun (and to do it in a single weekend), I timed things and took notes, the results of which I recently wrote up in a lengthy Medium post—check it out if you’re interested in following along and taking the same journey. 

So, where do I run my code?


At a high level, though, the question of which compute option to use is... it depends. Generally speaking, it boils down to thinking about the following three criteria:
  1. Level of abstraction (what you want to think about)
  2. Technical requirements and constraints
  3. Where your team and organization are going
Google Developer Advocate Brian Dorsey gave a great talk at Next last year on Deciding between Compute Engine, Container Engine, App Engine; here’s a condensed version:


As a general rule, developers prefer to take advantage of the higher levels of the compute abstraction ladder, as that allows us to focus on the application and the problem we are solving, while avoiding undifferentiated work such as server maintenance and capacity planning. With Cloud Functions, all you need to think about is code that runs in response to events (a developer's paradise!). But depending on the details of the problem you are trying to solve, technical constraints can pull you down the stack. For example, if you need a very specific kernel, you might be down at the base layer (Compute Engine). (For a good resource on navigating these decision points, check out: Choosing the right compute option in GCP: a decision tree.)

What programming language should I use?

GCP broadly supports the following programming languages: Go, Java, .NET, Node.js, PHP, Python, and Ruby (details and specific runtimes may vary by the service). The best language is a function of many factors, including the task at hand as well as personal preference. Since I was coming at this with no real-world backend development experience, I chose Node.js.

Quick aside for those of you who might be not familiar with Node.js: it’s an asynchronous JavaScript runtime designed for building scalable web application back-ends. Let’s unpack this last sentence:

  • Asynchronous means first-class support for asynchronous operations (compared to many other server-side languages where you might have to think about async operations and threading—a totally different mindset). It’s an ideal fit for most cloud applications, where a lot of operations are asynchronous. 
  • Node.js also is the easiest way for a lot of people who are coming from the frontend world (where JavaScript is the de-facto language) to start writing backend code. 
  • And there is also npm, the world’s largest collection of free, reusable code. That means you can import a lot of useful functionality without having to write it yourself.


Node.js is pretty cool, huh? I, for one, am convinced!

On your mark… Ready, set, go!

For my interview prep, I started with Compute Engine and VMs first, and then moved up the compute service-abstraction ladder to Kubernetes Engine and containers, App Engine and apps, and finally Cloud Functions. The following table provides a quick summary, along with links to my detailed journey and useful getting-started resources.


Compute Engine

Basic steps:
  1. Create & set up a VM instance
  2. Set up the Node.js dev environment
  3. Code “Hello, World”
  4. Start the Node server
  5. Expose the app to external traffic
  6. Understand how scaling works

Time check: 4.5 hours

Kubernetes Engine

Basic steps:
  1. Code “Hello, World”
  2. Package the app into a container
  3. Push the image to Container Registry
  4. Create a Kubernetes cluster
  5. Expose the app to external traffic
  6. Understand how scaling works

Time check: 6 hours

App Engine

Basic steps:
  1. Code “Hello, World”
  2. Configure an app.yaml project file
  3. Deploy the application
  4. Understand scaling options

Time check: 1.5-2 hours

Cloud Functions

Basic steps:
  1. Code “Hello, World”
  2. Deploy the application

Time check: 15 minutes



Time-to-results comparison

Although this might be somewhat like comparing apples and oranges, here is a summary of my results. (As a reminder, this is just in the context of standing up a “Hello, World” web application from scratch, all concerns such as running the app in production aside.)

Your speed-to-results could be very different depending on multiple factors, including your level of expertise with a given technology. My goal was to grasp the fundamentals of every option in the GCP’s compute stack and assess the amount of work required to get from point A to point B… That said, if there is ever a cross-technology Top Gear fighter jet vs. car style contest on standing up a scalable HTTP microservice from scratch, I wouldn’t be afraid to take on a Kubernetes grandmaster like Kelsey Hightower with Cloud Functions!

To find out more about application development on GCP, check out Computing on Google Cloud Platform. Don’t forget—you get $300 in free credits when you sign up.

Happy building!

Improving application availability with Alias IPs, now with hot standby



High availability and redundancy are essential features for a cloud deployment. On Google Cloud Platform (GCP), Alias IPs allow you to configure secondary IPs or IP ranges on your virtual machine (VM) instances, for a secure and highly scalable way to deliver traffic to your applications. Today, we’re excited to announce that you can now dynamically add and remove alias IP ranges for existing, running VMs, so that you can migrate your applications from one VM to another in the event of a software or machine failure.

In the past, you could deploy highly available applications on GCP by using static routes that point to a hosted Virtual IP (VIP), plus adjusting the next-hop VM of that VIP destination based on availability of the application hosting the VM. Now, Alias IP ranges support hot standby availability deployments, including multiple standbys for a single VM (one-to-many), as well as multiple standbys for multiple VMs (many-to-many).
With this native support, you can now rely on GCP’s IP address management capabilities to carve out flexible IP ranges for your VMs. This delivers the following benefits over high-availability solutions that use static routes:
  • Improved security: Deployments that use Alias IP allow GCP to apply anti-spoofing checks that validate the source and destination IP, so that only traffic for the configured IP ranges is forwarded. In contrast, static routes require that you disable anti-spoof protection for a VM.
  • Connectivity through VPN / Google Cloud Interconnect: Highly available application VIPs implemented as Alias IP addresses can be announced by Cloud Router via BGP to an on-premises network connected via VPN or Cloud Interconnect. This is important if you are accessing the highly available application from your on-premises data center.
  • Native access to Google services like Google Cloud Storage, BigQuery and any other managed services from googleapis.com. By using Alias IP, highly available applications get native access to these services, avoiding bottlenecks created by an external NAT proxy.
Let’s take a look at how you can configure floating Alias IPs.

Imagine you need to configure a highly available application that requires machine state to be constantly synced, for example between a database master and slave running on VMs in your network. Using Internal Load Balancing doesn’t help here, since the traffic needs to be sent to only one server. With Alias IPs, you can configure your database to run using secondary IPs on the VM's primary network interface. In the event of a failure, you can dynamically remove this IP from the failed VM and attach it to the standby server.

This approach is also useful if an application in your virtual network needs to be accessed across regions, since Internal Load Balancing currently supports only in-region access.

You can use Alias IP from the gcloud command line interface.

To migrate 10.10.0.5 from VM-A to VM-B:
a) Remove the IP from VM-A:

gcloud compute instances network-interfaces update \
     virtual-machine-a --zone us-central1-a --aliases ""
b) Add the IP to VM-B:

gcloud compute instances network-interfaces update \
     virtual-machine-b --zone us-central1-a \
     --aliases "range1:10.10.0.5"
In addition to adding and removing alias IPs from running VMs, you can create up to 10 Alias IP ranges per network interface, including on up to seven secondary interfaces attached to other networks.
You can also use Alias IP with applications running within containers and being managed by container orchestration systems such as Kubernetes or Mesos. Click here to learn more about how Kubernetes uses Alias IPs.
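You can also assign an alias range when you first create a VM. Here’s a minimal sketch using the same range name as in the commands above (the subnet, zone, and /32 alias are illustrative):

gcloud compute instances create virtual-machine-a \
    --zone us-central1-a \
    --network-interface subnet=default,aliases=range1:10.10.0.5/32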

Being able to migrate your workloads while they are running goes a long way toward ensuring high availability for your applications. Drop us a line about how you use Alias IPs, and other networking features you’d like to see on GCP.