
Solution guide: backing up Windows files using CloudBerry Backup with Google Cloud Storage



Modern businesses increasingly depend on data as the foundation of their operations. The more critical that data is, the more important it is to protect it with backups. Unfortunately, even with regular backups, you're still susceptible to data loss from a local disaster or human error. Thus, many companies entrust their data to geographically distributed cloud storage providers like Google Cloud Platform (GCP). And when they do, they want convenient cloud backup automation tools that offer flexible backup options and quick on-demand restores.

One such tool is CloudBerry Backup (CBB), which offers the following capabilities:

  • Creating incremental data copies with low impact on production workloads
  • Data encryption along every transfer path
  • A flexible retention policy, letting you balance the amount of backup history you keep against the storage space it consumes
  • Ability to carry out hybrid restores with the use of local and cloud storage resources

CBB includes a broad range of features out of the box, allowing you to address most of your cloud backup needs, and is designed to have low impact on production servers and applications.

CBB has a low-footprint backup client that you install on the desired server. After you provision a Google Cloud Storage bucket, attach it to CBB and create a backup plan to immediately start protecting your files in the cloud.
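
If you prefer to provision the destination bucket programmatically rather than in the Cloud Console, the Google Cloud Storage client library can do it in a few lines. Here's a minimal Java sketch; the bucket name, location and storage class below are placeholder choices, not recommendations:

    import com.google.cloud.storage.Bucket;
    import com.google.cloud.storage.BucketInfo;
    import com.google.cloud.storage.Storage;
    import com.google.cloud.storage.StorageClass;
    import com.google.cloud.storage.StorageOptions;

    public class CreateBackupBucket {
      public static void main(String[] args) {
        // Authenticates with Application Default Credentials for the active project.
        Storage storage = StorageOptions.getDefaultInstance().getService();

        // Nearline is a plausible storage class for backup data; adjust as needed.
        Bucket bucket = storage.create(
            BucketInfo.newBuilder("my-cbb-backups")  // placeholder bucket name
                .setLocation("us-central1")
                .setStorageClass(StorageClass.NEARLINE)
                .build());

        System.out.println("Created bucket: " + bucket.getName());
      }
    }

Once the bucket exists, point CBB at it and your backup plan can start writing to it immediately.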

To simplify your cloud backup onboarding, check out the step-by-step tutorial on how to use CloudBerry Backup with Google Cloud Storage and easily restore any files.

Collaborating with Coursera to address the cloud skills gap



As more and more companies look to take advantage of what cloud computing, data analytics and machine learning can do for their businesses, the gap between the cloud skills companies need and the talent available to supply them has grown enormously. Lack of cloud expertise is often cited as the top challenge for companies wishing to migrate their business to the cloud.

To address this need, we’re collaborating with Coursera, a leading online education platform, to launch a series of on-demand Google Cloud Platform training offerings. Developed and taught by Google experts, these courses range in skills levels from beginner to advanced and include topics like cloud fundamentals, operations, security, data analytics and machine learning. Now with just a few clicks anyone in the world can get trained on Google Cloud Platform (GCP).

This collaboration is part of our ongoing effort, including our recent acquisition of training platform Qwiklabs, to provide learning experiences to customers in the ways that work best for them, be that the classroom, on-demand or a blended version of the two. In addition to Coursera, we are also working with a global network of instructor-led classroom training providers.

Visit the Coursera/Google catalogue to start learning today, and watch for more courses and specializations as we work with Coursera to expand the catalogue in the coming months.

Partnering on open source: Google and HashiCorp engineers on managing GCP infrastructure



Earlier in January, we shared the first episode of a video mini-series highlighting how the Google Cloud Graphite team is making open source software work great with the Google Cloud Platform (GCP). Today, we’re kicking off the next chapter of the series, featuring HashiCorp’s open-source DevOps tools and how to use them with GCP.

HashiCorp open source tools simplify application delivery, helping users provision, secure and run infrastructure for any application. We kick off the series with a high-level overview, featuring Kelsey Hightower, Staff Developer Advocate for GCP, and Armon Dadgar, CTO and co-founder of HashiCorp.


Then, for our next installment, we show HashiCorp and GCP in action. Imagine a small, independent game studio working on its next title: a retro 1980s-style arcade game updated for multiplayer and playable over the web. Watch as the team engages in collaborative development, demos the game to their CEO and deploys it for public release. Along the way, we feature:
  • Vagrant, which allows developers to create repeatable development environments to be used by any member of a team without consulting operators. Vagrant can easily spin up remote VMs on Google Compute Engine and gives developers shared access to the same VM, ideal for collaborative development.
  • Packer, which, from a single configuration file, produces machine images for many target environments, including Compute Engine. The ease with which Packer images can be described and built makes Packer an ideal fit for DevOps concepts such as immutable infrastructure and continuous delivery.
  • Terraform, which helps operators safely and predictably create, modify and destroy production infrastructure. It codifies APIs into declarative configuration files that can be shared among team members, treated as code, edited, reviewed and versioned. Operators can thus manage GCP resources spanning many products, which is key when provisioning scalable production infrastructure.
Join us on YouTube to watch other episodes, covering topics such as deploying with machine images and managing resources with infrastructure as code. Follow Google Cloud on YouTube, or @GoogleCloud on Twitter, to find out when new videos are published. And stay tuned for more blog posts and videos about work we’re doing with open-source providers like Puppet, Chef, Cloud Foundry, Red Hat, SaltStack and others.

No-cost VM migration to Google Cloud Platform now available with CloudEndure



When Google Cloud meets with large customers, it’s clear they're managing years of investment in on-prem tools and applications — many of them still mission-critical. One of the most common questions we field is how customers can get the cost, performance and innovation benefits of the cloud using the talent and technology they have today.

We're committed to making the transition to the cloud as seamless as possible, so to facilitate our customers’ journeys, we’re collaborating with CloudEndure, a cloud migration and disaster recovery provider, to offer a no-cost, self-service migration tool for Google Cloud Platform (GCP) customers.

The joint CloudEndure/GCP VM Migration Service allows you to migrate virtual machines and physical servers from your existing environment — whether the source machines are on-prem or already in the cloud — into GCP with near-zero downtime and little to no disruption. The service is offered at no charge, although you may incur costs for the machines created, as well as for the ephemeral helper instances that orchestrate the migration.

We’ve prepared a tutorial on how to use the new CloudEndure migration service here. We’ve also created a webinar, airing March 2 (and available on-demand thereafter), that details the solution’s full benefits. We look forward to seeing you in the cloud!

Guest post: building IoT applications with MQTT and Google Cloud Pub/Sub



[Editor’s note: Today we hear from Agosto, a Google Cloud Premier Partner that has been building products and delivering services on Google Cloud Platform (GCP) since 2012, including Internet of Things applications. Read on to learn about Agosto’s work to build an MQTT service broker for Google Cloud Pub/Sub, and how you can incorporate it into your own IoT applications.]

One of our key practice areas is Internet of Things (IoT). Using the many components of GCP, we’ve helped customers rapidly move their ideas from product concept to launch.

Along the way, we evaluated several IoT platforms and repeatedly came to the conclusion that we’d be better off staying on the GCP stack than committing to a single IoT platform with costly licensing hooks and closed-source practices. Our clients also like being able to build scalable, functional prototypes using pre-existing, standard reference architectures and tools.

One of the many challenges we faced along the way was picking an efficient transport for two-way messaging between “things” and GCP. After evaluating a number of emerging and mature protocols, we settled on Message Queuing Telemetry Transport (MQTT). Originated in 1999 by Andy Stanford-Clark and Arlen Nipper, MQTT is now an ISO standard; it's lightweight, has solid documentation and has tens of thousands of production deployments. Furthermore, many existing pre-IoT or “Machine to Machine” projects already use MQTT as their transport from embedded devices to the back office. With MQTT, we’ve been able to increase velocity and reduce complexity for our IoT products and services.

MQTT is a great transport protocol, but it can be challenging to manage at scale, particularly when it comes to scaling message storage and delivery systems. As one of the earliest Google partners to develop a set of reusable tools, reference architectures and methods for accelerating IoT products to market, we’ve been impressed with Google Cloud Pub/Sub, a durable, low-latency and scalable service for handling many-to-many asynchronous messaging. But Cloud Pub/Sub uses HTTPS to transfer data, and over numerous small requests, all those HTTP headers add up to a lot of extra data: a no-go when you’re dealing with a constrained device that communicates over a mobile network and pays for each byte in mobile data charges, battery usage or both.
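
For a sense of the direct route, here's a minimal sketch of publishing a single message with the official Cloud Pub/Sub Java client library; the project and topic names are placeholders. This path is perfectly reasonable server-side, where connection and header overhead is negligible:

    import com.google.cloud.pubsub.v1.Publisher;
    import com.google.protobuf.ByteString;
    import com.google.pubsub.v1.PubsubMessage;
    import com.google.pubsub.v1.TopicName;

    public class DirectPublish {
      public static void main(String[] args) throws Exception {
        // Placeholder project and topic names.
        Publisher publisher = Publisher.newBuilder(
            TopicName.of("my-project", "device-telemetry")).build();
        try {
          PubsubMessage message = PubsubMessage.newBuilder()
              .setData(ByteString.copyFromUtf8("{\"temp\": 21.5}"))
              .build();
          // publish() is asynchronous; get() blocks until the server acks.
          String messageId = publisher.publish(message).get();
          System.out.println("Published message " + messageId);
        } finally {
          publisher.shutdown();
        }
      }
    }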

We needed to bridge the gap between IoT-connected devices and Cloud Pub/Sub, and began investigating ways to connect MQTT to Cloud Pub/Sub using and extending RabbitMQ.

After initial load tests showed this approach was viable, Google asked Agosto to develop an open-source, highly performant MQTT connection broker that integrates with Cloud Pub/Sub. With low network overhead (up to 10x less than HTTPS in the scenarios we've tested) and high throughput, MQTT is a natural fit for many scenarios.

The resulting message broker integrates messaging between connected devices using an MQTT client and Cloud Pub/Sub; RabbitMQ performs the protocol conversion for two-way messaging between the device and Cloud Pub/Sub. This means administrators of the RabbitMQ compute infrastructure don't have to concern themselves with managing the durability of the data or scaling storage.
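
From the device's point of view, publishing through the broker is an ordinary MQTT publish. Here's a minimal sketch using the Eclipse Paho Java client; the broker host, credentials and topic below are hypothetical, since your broker deployment defines the actual endpoint and topic scheme:

    import org.eclipse.paho.client.mqttv3.MqttClient;
    import org.eclipse.paho.client.mqttv3.MqttConnectOptions;
    import org.eclipse.paho.client.mqttv3.MqttMessage;

    public class DevicePublish {
      public static void main(String[] args) throws Exception {
        // Hypothetical broker endpoint; substitute your deployment's host and port.
        MqttClient client = new MqttClient(
            "tcp://mqtt-broker.example.com:1883", MqttClient.generateClientId());

        MqttConnectOptions options = new MqttConnectOptions();
        options.setUserName("device-42");                   // placeholder credentials
        options.setPassword("device-secret".toCharArray());
        client.connect(options);

        MqttMessage message = new MqttMessage("{\"temp\": 21.5}".getBytes());
        message.setQos(1);  // at-least-once delivery to the broker
        client.publish("telemetry/device-42", message);     // placeholder topic

        client.disconnect();
      }
    }

The broker then relays the message into a Cloud Pub/Sub topic, where it can be consumed with the standard Pub/Sub clients.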

Our message broker can support both small and very large GCP projects. For example, with smaller projects and IoT prototypes, you can rapidly deploy a single node of Agosto’s MQTT to Pub/Sub Connection Broker supporting up to 120,000 messages per minute for as little as $25/month for the compute costs. Larger production deployments with load-balanced brokers can support millions of concurrent connections and much higher throughput.

Download the broker, follow the instructions and learn more about leveraging MQTT and GCP for your IoT project.
GitHub: https://github.com/Agosto/gcp-iot-adapter

And if you're looking for a more customized implementation of our MQTT to Pub/Sub Connection broker, visit our website to learn more about our offerings.

Expanding our IDE support with a new Eclipse plugin for App Engine


Eclipse is one of the most popular IDEs for Java developers. Today, we're launching the beta version of Cloud Tools for Eclipse, a plugin that brings Google Cloud Platform (GCP) tooling to Eclipse. Based on the Google Cloud SDK, the initial feature set targets the App Engine standard environment, including support for creating applications, running and debugging them inside the IDE with the Eclipse Web Tools Platform tooling, and deploying them to production.

You may be wondering how this plugin relates to the Google Plugin for Eclipse, which launched in 2009. The older plugin covers a broader set of technologies than just GCP, and its support for the Eclipse Web Tools Platform and Maven is spotty at best. Moving forward, we'll invest in building more cloud-related tooling in Cloud Tools for Eclipse.

Cloud Tools for Eclipse is available for Eclipse 4.5 (Mars) and Eclipse 4.6 (Neon) and can be installed through the Eclipse Update Manager. The plugin source code is available on GitHub, and we welcome contributions and reports of issues from the community.

First, install the Cloud Tools for Eclipse plugin. To verify that the plugin installed correctly, launch Eclipse and look at the bottom right-hand side of the window: you should see a Google “G” icon. Click on this icon to log in to your Google account.

Now we'll demonstrate how to create and deploy a simple Maven-based "Hello World" App Engine standard environment application. First, create a new App Engine project from the Cloud Console. (If this is your first time using GCP, we recommend signing up for the Free Trial first.) Click Create a project and work through the cards that follow.
Every GCP project has a unique project ID, and you'll need this string later, so grab it now: on the left-hand nav, click Home and copy the project ID.

Now that you have an App Engine project, you're ready to deploy a simple Hello World application. Open Eclipse, click File > New > Project, type “Maven-based Google” in the Wizards section, and select the Maven-based App Engine standard environment project wizard.
Fill in the Maven group ID and artifact ID and click Next.
On the next page, select the Hello World template and click Finish.
Now, right-click on your project in the Project Explorer and select Run As > App Engine. After a moment, you should see your application running locally on localhost; the correct URL is hyperlinked in the output terminal in Eclipse.
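
For reference, the generated Hello World application boils down to a plain servlet along these lines (the exact class name and output in the template may differ):

    import java.io.IOException;
    import javax.servlet.http.HttpServlet;
    import javax.servlet.http.HttpServletRequest;
    import javax.servlet.http.HttpServletResponse;

    // A minimal App Engine standard environment servlet, mapped in web.xml.
    public class HelloAppEngine extends HttpServlet {
      @Override
      public void doGet(HttpServletRequest request, HttpServletResponse response)
          throws IOException {
        response.setContentType("text/plain");
        response.getWriter().println("Hello App Engine!");
      }
    }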

Once you've finished running the application locally, you can deploy it to the cloud. Right-click on your application in the Eclipse Project Explorer and select Deploy to App Engine Standard. If you're logging in for the first time, a sign-in dialog appears; click the Account drop-down and proceed through the web browser UI to link the plugin to your GCP account.
Once signed in, enter the project ID you copied from the Cloud Console earlier and leave the rest as is.
Click Deploy to upload the finished project to App Engine. Status updates appear in the Eclipse console as files are uploaded. When the deployment finishes, the URL of the deployed application is shown in the Eclipse console. That’s it!

You can check the status of your application in the Cloud Console by heading to the App Engine tab and clicking on Instances to see the underlying infrastructure of your application.

We'll continue to add support for more GCP services to the plugin, so stay tuned for update notifications in the IDE. If you have specific feature requests, please submit them in the GitHub issue tracker.

To learn more about Java on GCP, visit the GCP Java developers portal, where you can find all the information you need to run your Java applications on GCP.

Happy Coding!

P.S. IntelliJ users, see here for the Cloud Tools for IntelliJ plugin.

Solution guide: creating self-service IT environments with CloudBolt



IT organizations want to realize the cost and speed benefits of cloud, but can’t afford to throw away years of investment in tools, talent and governance processes they’ve built on-prem. Hybrid models of application management have emerged as a way to get the best of both worlds.

Development and test (dev/test) environments support the development, testing, staging and production of enterprise applications. Working with CloudBolt Software, we’ve prepared a full tutorial that describes how to quickly provision these environments in a self-service fashion, while maintaining the full control over governance and policies that enterprise IT requires.

CloudBolt isn’t limited to dev/test workloads; it can manage anything your team runs on VMs. As a cloud management platform that integrates your on-prem virtualization and private cloud resources with the public cloud, CloudBolt serves as a bridge between your existing infrastructure and Google Cloud Platform (GCP). Developers within your organization can provision the resources they need through an intuitive self-service portal, while IT maintains full control over how these provisioned environments are configured. That lets teams reap the cost and agility benefits of GCP using the development tools and processes they’ve built up over the years. It’s also an elegant way to rein in VM sprawl, helping organizations manage the ad-hoc environments that spring up with new projects. CloudBolt even provides a way to automatically scan and discover VMs in both on-prem and cloud environments.

Teams can get started immediately with this self-service tutorial. Or join us for our upcoming webinar featuring CloudBolt’s CTO Bernard Sanders and Google’s Product Management lead for Developer Tools on January 26th. Don’t hesitate to reach out to us to explore which enterprise workloads make the most sense for your cloud initiatives.

Google Analytics Breakthrough: From Zero to Business Impact

Looking to sharpen your Google Analytics skills as you kick off 2017? A new full-color book is now available for analysts, marketers, front-end developers, managers, and anyone who seeks to strengthen their Google Analytics skills.


"In Google Analytics Breakthrough: From Zero to Business Impact, we strive to provide a step-by-step resource to help readers build a solid foundation for analytics competence. It starts at strategy and core concepts, extends to advanced reporting and integration techniques, and covers all the nuts, bolts, tricks, gaps, and pitfalls in between," says coauthor Feras Alhlou, Co-founder and Principal Consultant of E-Nor. "The book is structured to offer a succinct overview of each topic and allow more detailed exploration as the reader chooses."

The book includes contributions straight from the Google team. Avinash Kaushik's foreword starts things off with a constructive mindset, and Paul Muret's cover piece takes a unique perspective on the evolution of Google Analytics from the days of Urchin. Krista Seiden lends her top reporting tips, and Dan Stone shares insights on remarketing. Industry experts such as Jim Sterne, Brian Clifton, and Simo Ahava also offer key takeaways.

At nearly 600 pages, the book is quite comprehensive, but the authors outline a few main themes below.

1- Define and Measure Success 
It still bears repeating: identify your KPIs as part of your measurement strategy. Map your marketing and development initiatives to the KPIs and center your analytics around your success metrics and specific improvement targets. You'll be much more likely to drive, detect, and repeat your wins, both big and small, if you always know what you're aiming for.

2- Keep Your Focus on User Journey 
This has multiple meanings. From a Google Analytics reporting standpoint, take advantage of the reports and features that go beyond session scope and begin to approach a more complete picture of the user journey, such as Multi-Channel Funnels reports, custom segments, custom funnels in Analytics 360, and calculated metrics.

Even more fundamentally, remember to always relate your data to user experience. Contributor Meta Brown offers specific advice on crafting a hero story to make your analytics data more accessible and impactful for all stakeholders.

3- Take Full Advantage of Google Tag Manager
"When we were first outlining the book, we briefly considered dual-track native and Google Tag Manager examples," recollects coauthor Eric Fettman, Senior Consultant and Analytics Coach at E-Nor. "Shiraz steered us to a basically GTM-only approach, which streamlined the implementation chapters and really highlighted GTM's flexibility and power."

In addition to in-depth discussions of GTM's triggers, variables, and data layer, the book examines the relatively new and perhaps underutilized Environments feature. While the publication schedule didn't allow direct inclusion of GTM Workspaces, the supplemental online materials offer a detailed Workspaces walkthrough.

As an illustration of Google Tag Manager's flexibility, a Lookup Table variable can allow a single Google Analytics tag to populate different properties based on hostname.

4- Help Google Analytics Tell Stories in Your Own Language 
From both an implementation and a reporting standpoint, Google Analytics provides a range of capabilities for customizing your data set and optimizing the reporting experience so your data speaks clearly and relevantly. Custom dimensions and data import for your content, products, and back-end user classifications will let you build more meaningful and actionable narratives. Custom channels (for paid social traffic, as an example) will certainly yield much greater insights than default channel reporting. Alternate report displays and custom reports allow you to combine and isolate the metrics that are most important for the analysis at hand.

And don't overlook basic features such as secondary dimensions. The Landing Pages report is good by default; Landing Pages with Source/Medium applied as a secondary dimension might reveal a whole new story.

5- Master the Basics for Advanced Benefits in GA 360, BigQuery & Integration 
The fundamental Google Analytics data collection and processing tactics remain as important in 2017 as ever. You still need to implement event tracking, with a meaningful naming convention, to really understand user interaction. You must maintain consistency in campaign tagging for clarity in your Acquisition reports. In many cases, you still must apply view settings and/or filters to ensure data quality in all of your Google Analytics reports.

The benefits of clean data, however, extend beyond the Google Analytics user interface. If you're exporting to BigQuery (integration with BigQuery is enabled for Analytics 360 organizations) to analyze conversions over multiple sessions by different traffic channels, the campaign tagging and channel grouping work that you have already performed for your GA reporting will again prove critical. If you're also pulling your CRM data into BigQuery to integrate with GA data and measure the effect of specific interactions – such as downloads, video views, or live chats – on customer lifetime value, you'll be doubly glad that you took the time to properly implement your Google Analytics event tracking from the start.

Going forward, as we begin to navigate dynamic visualizations in Google Data Studio and look toward advanced solutions such as Attribution 360 and Audience 360, the competitive advantage of good, consolidated Google Analytics data, as a dataset for complementary tools and environments, will only grow.

"We dedicate the book to our contributors, to our clients, to the team at E-Nor, and especially to our coauthor and E-Nor cofounder Shiraz Asif, who passed away in March 2016 and will always be keenly missed." For more about Google Analytics Breakthrough: From Zero to Business Impact, visit www.gabreakthrough.com.

Posted by Feras Alhlou, Principal Consultant and Co-founder of E-Nor, Inc., Google Analytics Partner

Partnering on open source: Google and Pivotal engineers talk Cloud Foundry on GCP




Today we’re sharing the first episode of our Pivotal Cloud Foundry on Google Cloud Platform (GCP) mini video series, featuring engineers from the Pivotal and Google Cloud Graphite teams who've been working hard to make this open-source platform run great on GCP. Google’s Cloud Graphite team works exclusively on open-source projects in collaboration with project maintainers and customers. We’ll have more videos and blog posts like this one throughout the year highlighting that work.

In 2016 we began working with Pivotal, and announced back in October that customers can deploy and operate Pivotal Cloud Foundry on GCP. Thanks to this partnership, companies in industries like manufacturing, healthcare and retail can accelerate their digital transformation and run cloud-native applications on the same kind of infrastructure that powers Gmail, YouTube, Google Maps and more.
“The chemistry between the two engineering teams was remarkable, as if we had been working together for years. The Cloud Foundry community is already benefiting from this work. It’s simple to deploy Cloud Foundry atop Google’s infrastructure, and developers can easily extend their apps with Google’s analytics and machine learning services. We look forward to working with Google in the future to advance our shared vision for multi-cloud choice and flexibility.” - Joshua McKenty, Head of Platform Ecosystem, Pivotal
Specifically, together with Pivotal, we have:
  • Brought BOSH to GCP, adding support for Google’s global networking and load balancer, quick VM boot times, live migration and preemptible VM pricing
  • Built a service broker to let Cloud Foundry developers easily use Google services such as Google BigQuery, Google Cloud SQL and Google Cloud Machine Learning in their apps
  • Developed the stackdriver-tools BOSH release to give operators and developers access to health and diagnostics information in Stackdriver Logging and Stackdriver Monitoring
In the first episode of the video series, Dan Wendorf of Pivotal and I talk about deploying BOSH and Cloud Foundry to GCP, using the tutorial you can follow along with on GitHub.

Join us on YouTube to watch other episodes that will cover topics like setting up and consuming Google services with our Service Broker, or how to monitor and debug Cloud Foundry applications with Stackdriver. Just follow Google Cloud on YouTube, or @GoogleCloud on Twitter to find out when new videos are published. And stay tuned for more blog posts and videos about the work we’re doing with Puppet, Chef, HashiCorp, Red Hat, SaltStack and others.

Stackdriver Trace + Zipkin: distributed tracing and performance analysis for everyone



Editor's Note: You can now use Zipkin tracers with Stackdriver Trace. Go here to get started.

Part of the promise of Google Cloud Platform is that it gives developers access to the same tools and technologies that we use to run at Google scale. As the evolution of our Dapper distributed tracing system, Stackdriver Trace is one of those tools, letting developers analyze application latency and quickly isolate the causes of poor performance. While it initially focused on Google App Engine projects, Stackdriver Trace also supports applications running on virtual machines or containers via instrumentation libraries for Node.js, Java and Go (Ruby and .NET support will be available soon), as well as through an API. Trace is available at no charge for all projects, and our instrumentation libraries are all open source with permissive licenses.

Another popular distributed tracing system is Zipkin, which Twitter open-sourced in 2012. Zipkin provides a plethora of instrumentation libraries for capturing traces from applications, as well as a backend system for storing and presenting traces through a web interface. Zipkin is widely used; in addition to Twitter, Yelp and Salesforce are major contributors to the project, and organizations around the world use it to view and diagnose the performance of their distributed services.

Zipkin users have been asking for interoperability with Stackdriver Trace, so today we’re releasing a Zipkin server that allows Zipkin-compatible clients to send traces to Stackdriver Trace for analysis.

This will be useful for two groups of people: developers whose applications are written in a language or framework that Stackdriver Trace doesn’t officially support, and owners of applications that are currently instrumented with Zipkin who want access to Stackdriver Trace’s advanced analysis tools. We’re releasing this code open source on GitHub with a permissive license, as well as a container image for quick setup.
The new Stackdriver Trace Zipkin Connector is a drop-in replacement for an existing Zipkin backend that continues to work with the same Zipkin-compatible tracers, so you no longer need to set up, manage or maintain a Zipkin backend. Alternatively, you can run the new collector on each service that's instrumented with Zipkin tracers.

There are currently Zipkin clients available for Java, .Net, Node.js, Python, Ruby and Go, with built-in integration to a variety of popular web frameworks.

Setup Instructions

Read the Using Stackdriver with Zipkin Collector guide to configure the collector and start gathering trace data from your distributed tracer. If you're not already using a tracer client, you can find one in the list of the most popular Zipkin tracers.

FAQ

Q: What does this announcement mean if I’ve been wanting to use Stackdriver Trace but it doesn’t yet support my language?

If a Zipkin tracer supports your chosen language and framework, you can now use Stackdriver Trace by having the tracer library send its data to the Stackdriver Trace Zipkin Collector.
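
For example, a Java service instrumented with Brave (the reference Zipkin tracer for Java) only needs its span reporter pointed at the collector's endpoint; the instrumentation itself doesn't change. A minimal sketch, assuming the collector is reachable at a placeholder hostname on the standard Zipkin port (exact builder methods vary somewhat across Brave versions):

    import brave.Span;
    import brave.Tracer;
    import brave.Tracing;
    import zipkin2.reporter.AsyncReporter;
    import zipkin2.reporter.okhttp3.OkHttpSender;

    public class ZipkinToStackdriver {
      public static void main(String[] args) {
        // Send spans to the Stackdriver Trace Zipkin Collector instead of a
        // self-hosted Zipkin backend (placeholder host; 9411 is the usual port).
        OkHttpSender sender = OkHttpSender.create(
            "http://zipkin-collector.example.com:9411/api/v2/spans");
        AsyncReporter<zipkin2.Span> reporter = AsyncReporter.create(sender);

        Tracing tracing = Tracing.newBuilder()
            .localServiceName("frontend")  // placeholder service name
            .spanReporter(reporter)
            .build();

        Tracer tracer = tracing.tracer();
        Span span = tracer.newTrace().name("handle-request").start();
        try {
          // ... application work to be measured ...
        } finally {
          span.finish();  // queued, then shipped to the collector
        }

        tracing.close();
        reporter.close();
      }
    }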

Q: What does this announcement mean if I currently use Zipkin?

You’re welcome to set up the Stackdriver Trace Zipkin server and use it in conjunction with or as a replacement for your existing Zipkin backend. In addition to displaying traces, Stackdriver Trace includes advanced analysis tools like Insights and Latency Reports that work with trace data collected from Zipkin tracers. And because Stackdriver Trace is hosted by Google, you won't need to maintain your own backend services for trace collection and display.
Latency reports are available to all Stackdriver Trace customers

Q: What are the limitations of using the Stackdriver Trace Zipkin Collector?
This release has two known limitations:
  1. Zipkin tracers must support the correct Zipkin time and duration semantics.
  2. Zipkin tracers and the Stackdriver Trace instrumentation libraries can’t append spans to the same traces, meaning that traces captured with one type of library won’t contain spans for services instrumented with the other. For example, requests made to a Node.js web application traced with a Zipkin library will be sent to Stackdriver Trace, but those traces won’t contain spans generated within a downstream API application or for the RPC calls it makes to the database. This is because Zipkin and Stackdriver Trace use different formats for propagating trace context between services (illustrated below).
    For this reason we recommend that projects wanting to use Stackdriver Trace either exclusively use Zipkin-compatible tracers along with the Zipkin Connector, or use instrumentation libraries that work natively with Stackdriver Trace (like the official Node.js, Java or Go libraries).
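
To make the incompatibility concrete, here's roughly what the two propagation formats look like on the wire; the header values below are illustrative, not real trace IDs:

    Zipkin-compatible tracers propagate B3 headers:
        X-B3-TraceId: 463ac35c9f6413ad48485a3953bb6124
        X-B3-SpanId: a2fb4a1d1a96d312
        X-B3-Sampled: 1

    The Stackdriver Trace libraries propagate a single header:
        X-Cloud-Trace-Context: 463ac35c9f6413ad48485a3953bb6124/1234;o=1

A trace begun in one format isn't continued by a service that only understands the other.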

Q: Will this work as a full Zipkin server?

No, as the initial release only supports write operations. Let us know if you think that adding read operations would be useful, or submit a pull request through GitHub.

Q: How much does Stackdriver Trace cost?

You can use Zipkin with Stackdriver Trace at no cost.

Q: Can I use Stackdriver Trace to analyze my AWS, on-premises, or hybrid applications or is it strictly for services running on Google Cloud Platform?

Several projects already do this today! Stackdriver Trace will analyze all data submitted through its API, regardless of where the instrumented service is hosted, including traces and spans collected from the Stackdriver Trace instrumentation libraries or through the Stackdriver Trace Zipkin Connector.

Wrapping up

We here on the Stackdriver team would like to send out a huge thank you to Adrian Cole of the Zipkin open source project. Adrian’s enthusiasm, technical assistance, design feedback and help with the release process have been invaluable. We hope to expand this collaboration with Zipkin and other open source projects in the future. A huge shout-out is also due to the engineers on the Stackdriver team who built this feature.

Like the Stackdriver Trace instrumentation libraries, the Zipkin Connector has been published on GitHub under the Apache license. Feel free to file issues there or submit pull requests for proposed changes.