Category Archives: Google for Work Blog


Sell smarter with ProsperWorks for G Suite

If you want to scale your business, you’ve likely invested in a CRM solution to manage sales workflows and speed up data-driven decision-making. But CRMs can become a clunky epicenter for team collaboration. You need actionable data insights to drive deals forward, which often requires a CRM tool that integrates with the apps you use every day. ProsperWorks for G Suite can help.

With ProsperWorks for G Suite, it’s simple to integrate your CRM with the tools you already use, like Gmail, Calendar and Docs. You can:

  • Access everything in one place—forget toggling back and forth between your CRM and G Suite applications
  • Automatically sync Google Contacts in ProsperWorks
  • View and track sales activity in real-time directly within Gmail
  • Export data from Sheets to ProsperWorks and get insights instantly without manual data entry
  • Create custom dashboards, reports and charts using the Google Sheets integration in the ProsperWorks CRM Custom Report Builder

Why UrbanVolt chose ProsperWorks for G Suite

UrbanVolt, an energy-saving firm based in Dublin, Ireland, installs LED lighting for businesses at no upfront cost (“lighting as a service”). This leasing model allowed the company to scale rapidly, but it also meant managing a higher volume of inbound leads. “We needed a solution that would allow us to scale our inbounds and deal flow with ease,” says Edel Kennedy, Head of Marketing at UrbanVolt.

The UrbanVolt team opted for ProsperWorks for its intuitive design and its seamless integration with G Suite. “ProsperWorks was the clear choice for our team. There was no learning curve since it worked with G Suite, where we spend the majority of our day,” says Kennedy.

Now, UrbanVolt employees save time because they don’t have to toggle between their CRM and spreadsheets to analyze data. Instead, they use G Suite tools like the Sheets Add-on for ProsperWorks to view opportunities at various stages of the sales cycle and collaboratively create advanced dashboards, reports, charts and graphs.

If you want to get started using ProsperWorks for G Suite at your business, sign up for a free webinar on Wednesday, June 21, 2017 at 9 a.m. PT / 12 p.m. ET.

Source: Google Cloud


Visualize data instantly with machine learning in Google Sheets

Sorting through rows and rows of data in a spreadsheet can be overwhelming. That’s why today, we’re rolling out new features in Sheets that make it even easier for you to visualize and share your data, and find insights your teams can act on.

Ask and you shall receive → Sheets can build charts for you

Explore in Sheets, powered by machine learning, helps teams gain insights from data, instantly. Simply ask questions—in words, not formulas—to quickly analyze your data. For example, you can ask “what is the distribution of products sold?” or “what are average sales on Sundays?” and Explore will help you find the answers.  

Now, we’re using the same powerful technology in Explore to make visualizing data even more effortless. If you don’t see the chart you need, just ask. Instead of manually building charts, ask Explore to do it by typing in “histogram of 2017 customer ratings” or “bar chart for ice cream sales.” Less time spent building charts means more time acting on new insights.
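
Explore itself is a feature of the Sheets UI, so there’s no “ask in plain English” API to call. If you do want charts built by a script rather than by hand, though, the Sheets API offers an addChart request. The sketch below is illustrative only; the spreadsheet ID, sheet ID and cell ranges are placeholders.

    # Hedged sketch: add a basic column chart with the Sheets API (Python client).
    # SPREADSHEET_ID, the sheetId and the cell ranges are all placeholders.
    from googleapiclient.discovery import build

    sheets = build("sheets", "v4")  # assumes Application Default Credentials

    add_chart = {
        "addChart": {
            "chart": {
                "spec": {
                    "title": "Ice cream sales",
                    "basicChart": {
                        "chartType": "COLUMN",
                        "legendPosition": "BOTTOM_LEGEND",
                        # Column A holds the categories, column B the values.
                        "domains": [{"domain": {"sourceRange": {"sources": [{
                            "sheetId": 0, "startRowIndex": 0, "endRowIndex": 13,
                            "startColumnIndex": 0, "endColumnIndex": 1}]}}}],
                        "series": [{"series": {"sourceRange": {"sources": [{
                            "sheetId": 0, "startRowIndex": 0, "endRowIndex": 13,
                            "startColumnIndex": 1, "endColumnIndex": 2}]}},
                            "targetAxis": "LEFT_AXIS"}],
                    },
                },
                "position": {"newSheet": True},
            }
        }
    }

    sheets.spreadsheets().batchUpdate(
        spreadsheetId="SPREADSHEET_ID",
        body={"requests": [add_chart]},
    ).execute()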


Instantly sync your data from Sheets → Docs or Slides

Whether you’re preparing a client presentation or sharing sales forecasts, keeping your data up to date is critical to success, but it can also be time-consuming if you have to update the same charts or tables in multiple places. That’s why we made it easier to programmatically update charts in Docs and Slides last year.
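
There isn’t sample code in the original announcement, but one way to script this for Slides is the Slides API’s linked-chart refresh. The sketch below is a minimal illustration with the Python client; PRESENTATION_ID and CHART_OBJECT_ID are placeholders you would replace with your own presentation and embedded-chart IDs.

    # Hedged sketch: refresh a Sheets chart that is linked into a Slides deck.
    # PRESENTATION_ID and CHART_OBJECT_ID are placeholders, not real IDs.
    from googleapiclient.discovery import build

    slides = build("slides", "v1")  # assumes Application Default Credentials

    slides.presentations().batchUpdate(
        presentationId="PRESENTATION_ID",
        body={"requests": [
            {"refreshSheetsChart": {"objectId": "CHART_OBJECT_ID"}}
        ]},
    ).execute()

Running a request like this on a schedule keeps a deck’s linked charts in step with the source spreadsheet.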

Now, we’re making it simple to keep tables updated, too. Just copy and paste data from Sheets to Docs or Slides and tap the “update” button to sync your data.


Even more Sheets updates

We’re constantly looking for ways to improve our customers’ experience in Sheets. Based on your feedback, we’re rolling out more updates today to help teams get work done faster:

  • Keyboard shortcuts: Override your browser’s default shortcuts with the spreadsheet shortcuts you’re already used to. For example, delete a row quickly by using “Ctrl+-.”
  • Upgraded printing experience: Preview Sheet data in today’s new print interface. Adjust margins, select scale and alignment options or repeat frozen rows and columns before you print your work.
  • Powerful new chart editing experience: Create and edit charts in a new, improved sidebar. Choose from custom colors in charts or add additional trendlines to model data. You can also create more chart types, like 3D charts. This is now also available on iPhones and iPads.
  • More spreadsheet functions: We added new functions to help you find insights, bringing the total function count in Sheets to more than 400. Try “SORTN,” a function unique to Sheets, which can show you the top three orders or best-performing months in a sales record spreadsheet (see the example after this list). Sheets also supports statistical functions like “GAMMADIST,” “F.TEST” and “CHISQ.INV.RT.”
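
As a concrete illustration of the SORTN example above, assume order names sit in column A and order values in column B (a hypothetical layout). The following formula returns the three rows with the highest values in column B, i.e. your top three orders:

    =SORTN(A2:B, 3, 0, B2:B, FALSE)

The third argument (0) caps the result at three rows even when there are ties, and FALSE sorts column B in descending order.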

These new features in Sheets are rolling out starting today. Learn how Sheets can help you find valuable insights.

Source: Google Cloud


Keeping your company data safe with new security updates to Gmail

Keeping company data secure is priority one, and that starts with protecting the tools that your employees use every day. We’re constantly adding security features to help businesses stay ahead of potential threats, and are excited to announce new security features for Gmail customers, including early phishing detection using machine learning, click-time warnings for malicious links, unintended external reply warnings and built-in defenses against new threats.

New machine learning models in Gmail to block phishing

Machine learning helps Gmail block sneaky spam and phishing messages from showing up in your inbox with over 99.9 percent accuracy. This is huge, given that 50-70 percent of messages that Gmail receives are spam. We’re continuing to improve spam detection accuracy with early phishing detection, a dedicated machine learning model that selectively delays messages (less than 0.05 percent of messages on average) to perform rigorous phishing analysis and further protect user data from compromise.

Our detection models integrate with Google Safe Browsing machine learning technologies for finding and flagging phishy and suspicious URLs. These new models combine a variety of techniques such as reputation and similarity analysis on URLs, allowing us to generate new URL click-time warnings for phishing and malware links. As we find new patterns, our models adapt more quickly than manual systems ever could, and get better with time.

New warnings for employees to prevent data loss 

When employees are empowered to make the right decisions to protect data, it can improve an enterprise’s security posture. To help with this, Gmail now displays unintended external reply warnings to users to help prevent data loss. Now, if you try to respond to someone outside of your company domain, you’ll receive a quick warning to make sure you intended to send that email. And because Gmail has contextual intelligence, it knows if the recipient is an existing contact or someone you interact with regularly, to avoid displaying warnings unnecessarily.


Protecting your business with the latest security advancements

Security threats are constantly evolving, and we’re always looking for ways to help people protect their data. With new built-in defenses against ransomware and polymorphic malware, Gmail now blocks millions of additional emails that can harm users. We classify new threats by combining thousands of spam, malware and ransomware signals with attachment heuristics (signals that an attachment could be a threat) and sender signatures (from malware that has already been identified).

Outside of today’s updates, here are a few other security advancements we’ve made within Gmail to make sure you stay protected:

Whirlpool, PwC and Woolworths are just a few of the companies that rely on Gmail to collaborate securely. Learn more.

Source: Google Cloud


Let’s jam—Jamboard is now available

Good ideas become great ones when you work together with your teammates. But as teams become increasingly distributed, you need tools that spur visual creativity and collaboration—a way to sketch out ideas, rev on them with colleagues no matter where they may be in the world and make them real. That’s where Jamboard, our cloud-based, collaborative whiteboard, can help. Starting today, Jamboard is available for purchase in the United States.

Breaking down creative barriers

We tested Jamboard with enterprise early adopters like Dow Jones, Whirlpool and Pinterest, who shared how Jamboard helped their businesses collaborate more efficiently and bring the power of the cloud into team brainstorms. 

Shaown Nandi, chief information officer at Dow Jones, saw his teams become more hands-on in creative sessions thanks to Jamboard. “Jamboard breaks down barriers to interactive, visual collaboration across teams everywhere,” said Nandi. “It’s the perfect anchor for a meeting and encourages impromptu, productive sessions. We can easily add any content to the Jamboard to capture great ideas from everyone. We immediately saw the benefits.”


We received great suggestions from customers on how to make Jamboard even better, such as adding a greater range of secure Wi-Fi network configurations so it’s easier to jam in different business settings. Customers also confirmed how important high-speed touch is when using a digital whiteboard, and we’re using the NVIDIA Jetson TX1 embedded computer to make sure Jamboard’s 4K touchscreen delivers a responsive experience. Starting today, you can purchase a Jamboard in three colors: cobalt blue, carmine red and graphite grey.


Order Jamboard today 

You can purchase Jamboard for $4,999 USD, which includes two styluses, an eraser and a wall mount. We’re also running a promotion—if you order on or before September 30, 2017, you’ll receive $300 off of the annual management and support fee for the first year, as well as a discount on the optional rolling stand.

Keep in mind that a G Suite plan is required to use Jamboard so that you can access files from Drive, use them in your brainstorms and come back to your work later. Plus, the Jamboard mobile companion apps can be used remotely so you can work on the go. Also, we’re teaming up with BenQ to handle fulfillment, delivery and support. Full pricing details are available at google.com/jamboard.


Jamboard is available to G Suite customers in the U.S. to start, and will be available for purchase in the U.K. and Canada this summer, with more countries becoming available over time. Contact your Google Cloud sales rep or visit google.com/jamboard to learn more about how you can start jamming with colleagues today.

If you’re a current G Suite admin, check out this post for more information.

Source: Google Cloud


AI in the newsroom: What’s happening and what’s next?

Bringing people together to discuss the forces shaping journalism is central to our mission at the Google News Lab. Earlier this month, we invited Nick Rockwell, Chief Technology Officer of The New York Times, and Luca D’Aniello, Chief Technology Officer of the Associated Press, to Google’s New York office to talk about the future of artificial intelligence in journalism and the challenges and opportunities it presents for newsrooms.

The event opened with an overview of the AP's recent report, "The Future of Augmented Journalism: a guide for newsrooms in the age of smart machines,” which was based on interviews with dozens of journalists, technologists, and academics (and compiled with the help of a robot, of course). As early adopters of this technology, the AP highlighted a number of their earlier experiments:

This image of a boxing match was captured by one of AP’s AI-powered cameras.
  • Deploying more than a dozen AI-powered robotic cameras at the 2016 Summer Olympics to capture angles not easily available to journalists
  • Using Google’s Cloud Vision API to automatically classify and tag photos throughout the report (see the sketch after this list)
  • Increasing news coverage of quarterly earnings reports from 400 to 4,000 companies using automation
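
To give a sense of what the photo-tagging step can look like in code, here is a minimal, hypothetical sketch of a Cloud Vision API label-detection call using the Python client. The Cloud Storage URI is a placeholder, and this is not the AP’s actual pipeline, which hasn’t been published.

    # Hedged sketch: tag a photo with labels via the Cloud Vision API.
    # The gs:// URI is a placeholder; the AP's real pipeline is not public.
    from google.cloud import vision

    client = vision.ImageAnnotatorClient()
    image = vision.Image(
        source=vision.ImageSource(image_uri="gs://my-bucket/boxing-match.jpg")
    )

    response = client.label_detection(image=image)
    for label in response.label_annotations:
        print(f"{label.description}: {label.score:.2f}")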

The report also addressed key concerns, including risks associated with unchecked algorithms, potential for workflow disruption, and the growing gap in skill sets.

Here are three themes that emerged from the conversation with Rockwell and D’Aniello:

1. AI will increase a news organization's ability to focus on content creation

D’Aniello noted that journalists, often “pressed for resources,” are forced to “spend most of their time creating multiple versions of the same content for different outlets.” AI can reduce monotonous tasks like these and allow journalists to spend more of their time on their core expertise: reporting.

For Rockwell, AI could also be leveraged to power new reporting, helping journalists analyze massive data sets to surface untold stories. Rockwell noted that “the big stories will be found in data, and whether we can find them or not will depend on our sophistication using large datasets.”

2. AI can help improve the quality of dialogue online and help organizations better understand their readers' needs.

Given the increasing abuse and harassment found in online conversations, many publishers are backing away from allowing comments on articles. For the Times, the Perspective API tool developed by Jigsaw (part of Google’s parent company Alphabet) is creating an opportunity to encourage constructive discussions online by using machine learning to make comment moderation more efficient. Previously, the Times could only moderate comments on 10 percent of articles. Now, the technology lets them open comments on all articles.
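
Perspective is exposed as a simple REST API, so a moderation pipeline can score comments before they reach human moderators. The sketch below is a rough illustration, not the Times’ actual integration; API_KEY and the sample comment are placeholders.

    # Hedged sketch: score a comment's toxicity with the Perspective API.
    # API_KEY is a placeholder; this is not the New York Times' integration.
    import requests

    url = ("https://commentanalyzer.googleapis.com/v1alpha1/"
           "comments:analyze?key=API_KEY")
    payload = {
        "comment": {"text": "Reasonable people can disagree about this."},
        "languages": ["en"],
        "requestedAttributes": {"TOXICITY": {}},
    }

    response = requests.post(url, json=payload).json()
    score = response["attributeScores"]["TOXICITY"]["summaryScore"]["value"]
    print(f"Toxicity: {score:.2f}")  # 0.0 (unlikely toxic) to 1.0 (very likely toxic)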

The Times is also thinking about using AI to increase the relevance of what they deliver to readers. As Rockwell notes, “Our readers have always looked to us to filter the world, but to do that only through editorial curation is a one-size-fits-all approach. There is a lot we can do to better serve them.”

3. Applying journalistic standards is essential to AI’s successful implementation in newsrooms

Both panelists agreed that the editorial standards that go into creating quality journalism should be applied to AI-fueled journalism. As Francesco Marconi, the author of the AP report, remarked, “Humans make mistakes. Algorithms make mistakes. All the editorial standards should be applied to the technology.”

Here are a few approaches we’ve seen for how those standards can be applied to the technology:

  • Pairing up journalists with the tech. At the AP, business journalists trained software to understand how to write an earnings report.
  • Serving as editorial gatekeepers. News editors should play a role in synthesizing and framing the information AI produces.
  • Ensuring more inclusive reporting. In 2016, Google.org, USC and the Geena Davis Institute on Gender in Media used machine learning to create a tool that collects data on gender portrayals in media.

What’s ahead

What will it take for AI to be a positive force in journalism? The conversation showed that while the path wasn’t certain, getting to the right answers would require close collaboration between the technology industry, news organizations, and journalists.

“There is a lot of work to do, but it’s about the mindset,” D’Aniello said. “Technology was seen as a disruptor of the newsroom, and it was difficult to introduce things. I don’t think this is the case anymore. The urgency and the need is perceived at the editorial level.”

We look forward to continuing to host more conversations on important topics like this one. Learn more about the Google News Lab on our website.

Header image of robotic camera courtesy of Associated Press.

Source: Google Cloud


Build and train machine learning models on our new Google Cloud TPUs

We’re excited to announce that our second-generation Tensor Processing Units (TPUs) are coming to Google Cloud to accelerate a wide range of machine learning workloads, including both training and inference. We call them Cloud TPUs, and they will initially be available via Google Compute Engine.

We’ve witnessed extraordinary advances in machine learning (ML) over the past few years. Neural networks have dramatically improved the quality of Google Translate, played a key role in ranking Google Search results and made it more convenient to find the photos you want with Google Photos. Machine learning allowed DeepMind’s AlphaGo program to defeat Lee Sedol, one of the world’s top Go players, and also made it possible for software to generate natural-looking sketches.

These breakthroughs required enormous amounts of computation, both to train the underlying machine learning models and to run those models once they’re trained (this is called “inference”). We’ve designed, built and deployed a family of Tensor Processing Units, or TPUs, to allow us to support larger and larger amounts of machine learning computation, first internally and now externally.

While our first TPU was designed to run machine learning models quickly and efficiently—to translate a set of sentences or choose the next move in Go—those models still had to be trained separately. Training a machine learning model is even more difficult than running it, and days or weeks of computation on the best available CPUs and GPUs are commonly required to reach state-of-the-art levels of accuracy.

Research and engineering teams at Google and elsewhere have made great progress scaling machine learning training using readily-available hardware. However, this wasn’t enough to meet our machine learning needs, so we designed an entirely new machine learning system to eliminate bottlenecks and maximize overall performance. At the heart of this system is the second-generation TPU we're announcing today, which can both train and run machine learning models.

Our new Cloud TPU delivers up to 180 teraflops to train and run machine learning models.

Each of these new TPU devices delivers up to 180 teraflops of floating-point performance. As powerful as these TPUs are on their own, though, we designed them to work even better together. Each TPU includes a custom high-speed network that allows us to build machine learning supercomputers we call “TPU pods.” A TPU pod contains 64 second-generation TPUs and provides up to 11.5 petaflops to accelerate the training of a single large machine learning model. That’s a lot of computation!

Using these TPU pods, we've already seen dramatic improvements in training times. One of our new large-scale translation models used to take a full day to train on 32 of the best commercially-available GPUs—now it trains to the same accuracy in an afternoon using just one eighth of a TPU pod.

A “TPU pod” built with 64 second-generation TPUs delivers up to 11.5 petaflops of machine learning acceleration.

Introducing Cloud TPUs

We’re bringing our new TPUs to Google Compute Engine as Cloud TPUs, where you can connect them to virtual machines of all shapes and sizes and mix and match them with other types of hardware, including Skylake CPUs and NVIDIA GPUs. You can program these TPUs with TensorFlow, the most popular open-source machine learning framework on GitHub, and we’re introducing high-level APIs, which will make it easier to train machine learning models on CPUs, GPUs or Cloud TPUs with only minimal code changes.
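
The announcement doesn’t include code, but to illustrate the “minimal code changes” idea, here is a hedged sketch using the tf.distribute APIs from a later TensorFlow 2.x release (not the high-level APIs previewed here). The TPU name is a placeholder and the model is an arbitrary toy example; only the strategy setup is TPU-specific.

    # Hedged sketch (TensorFlow 2.x): the model code is hardware-agnostic;
    # swapping the distribution strategy is the only TPU-specific change.
    import tensorflow as tf

    resolver = tf.distribute.cluster_resolver.TPUClusterResolver(tpu="my-cloud-tpu")  # placeholder name
    tf.config.experimental_connect_to_cluster(resolver)
    tf.tpu.experimental.initialize_tpu_system(resolver)
    strategy = tf.distribute.TPUStrategy(resolver)
    # strategy = tf.distribute.MirroredStrategy()  # the same model code on GPUs

    with strategy.scope():
        model = tf.keras.Sequential([
            tf.keras.layers.Dense(128, activation="relu", input_shape=(784,)),
            tf.keras.layers.Dense(10, activation="softmax"),
        ])
        model.compile(optimizer="adam",
                      loss="sparse_categorical_crossentropy",
                      metrics=["accuracy"])

    # model.fit(train_dataset, epochs=5)  # train_dataset: a tf.data.Dataset pipeline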

With Cloud TPUs, you have the opportunity to integrate state-of-the-art ML accelerators directly into your production infrastructure and benefit from on-demand, accelerated computing power without any up-front capital expenses. Since fast ML accelerators place extraordinary demands on surrounding storage systems and networks, we’re making optimizations throughout our Cloud infrastructure to help ensure that you can train powerful ML models quickly using real production data.

Our goal is to help you build the best possible machine learning systems from top to bottom. While Cloud TPUs will benefit many ML applications, we remain committed to offering a wide range of hardware on Google Cloud so you can choose the accelerators that best fit your particular use case at any given time. For example, Shazam recently announced that they successfully migrated major portions of their music recognition workloads to NVIDIA GPUs on Google Cloud and saved money while gaining flexibility.

Introducing the TensorFlow Research Cloud

Much of the recent progress in machine learning has been driven by unprecedentedly open collaboration among researchers around the world across both industry and academia. However, many top researchers don’t have access to anywhere near as much compute power as they need. To help as many researchers as we can and further accelerate the pace of open machine learning research, we'll make 1,000 Cloud TPUs available at no cost to ML researchers via the TensorFlow Research Cloud.

Sign up to learn more

If you’re interested in accelerating training of machine learning models, accelerating batch processing of gigantic datasets, or processing live requests in production using more powerful ML models than ever before, please sign up today to learn more about our upcoming Cloud TPU Alpha program. If you’re a researcher expanding the frontier of machine learning and willing to share your findings with the world, please sign up to learn more about the TensorFlow Research Cloud program. And if you’re interested in accessing whole TPU pods via Google Cloud, please let us know more about your needs.

Source: Google Cloud


Save time with Smart Reply in Gmail

It’s pretty easy to read your emails while you’re on the go, but responding to those emails takes effort. Smart Reply, available in Inbox by Gmail and Allo, saves you time by suggesting quick responses to your messages. The feature already drives 12 percent of replies in Inbox on mobile. And starting today, Smart Reply is coming to Gmail for Android and iOS too. 

Smart Reply suggests three responses based on the email you received:

Smart Reply in Gmail on iOS

Once you’ve selected one, you can send it immediately or edit your response starting with the Smart Reply text. Either way, you’re saving time.

Smart Reply in Gmail on Android

Smart Reply utilizes machine learning to give you better responses the more you use it. So if you're more of a “thanks!” than a “thanks.” person, we'll suggest the response that's, well, more you! If you want to learn about the smarts behind Smart Reply, check out the Google Research Blog.

Smart Reply will roll out globally on Android and iOS in English first, and Spanish will follow in the coming weeks. Stay tuned for more languages coming soon!

Source: Google Cloud


Delivering on our partnership with SAP

At Next ‘17, we announced a new partnership with SAP, focused on integrating our industry-leading cloud solutions with SAP enterprise applications. This week we’re at the SAP SAPPHIRE NOW event in Orlando to talk about the significant progress we’ve made over the last two months. We’re collaborating with SAP to create solutions that can help accelerate digital transformation for enterprises by combining the power of SAP applications like SAP S/4HANA with the cutting-edge innovation available on Google Cloud in the following areas.

SAP on GCP

SAP NetWeaver-based applications are now certified on GCP

We’re announcing the certification of the SAP NetWeaver technology platform on Google Cloud Platform (GCP), which enables customers to run products like SAP S/4HANA, SAP BW/4HANA, SAP Business Suite and SAP Business Warehouse on GCP.

sovanta, a German technology company, is one of the first customers to run SAP S/4HANA on GCP infrastructure to help transform their operations, grow quickly and transition from on-premises to cloud.

Expanding the certification of SAP HANA on Google Cloud Platform  

We’ve completed the SAP HANA certification for 416GB GCP VMs and another certification for scale-out SAP HANA with four VMs, which enables enterprise customers with ever-growing volumes of business data to scale SAP applications on our cloud infrastructure.

Smyths Toys, one of the fastest growing toy retailers in the U.K. and Ireland, depends on the reliability and performance of Google Cloud to run their ecommerce platform powered by SAP Hybris.

"We chose Google Cloud for the price and performance of the infrastructure and the future-proofing we get with its innovative capabilities, including machine learning and data analytics services. The partnership with Google Cloud and SAP will help us further integrate our business systems and drive efficiency and value for our company," says Rob Wilson, the CTO of Smyths Toys.

Availability of SAP Analytics Cloud connector for BigQuery

With the addition of a native connector to BigQuery, it’s easier than ever for joint customers to discover, predict and share meaningful business insights across data in SAP systems and Google BigQuery.

Machine learning, data custodian and G Suite

Data custodian demos  

Google and SAP have collaborated on an innovative approach to address enterprise concerns around data protection and privacy while continuing to offer enterprises the flexibility and power of Google’s cloud platform. In the Google booth at SAPPHIRE NOW, we have demos showcasing our vision for how enterprises can leverage SAP’s expertise and partnership with Google to gain significantly greater visibility into how their data is managed, accessed and protected on GCP.

Machine learning innovation

We’re working together with SAP to build intelligent applications combining SAP’s business process expertise with our machine learning services, such as Google Translate API, Speech APIs, Cloud ML Engine and the open source machine learning framework TensorFlow. To spur innovation, we’ve jointly announced an Intelligent App Challenge. The competition invites SAP and Google ecosystem partners to build applications using SAP HANA, express edition on GCP.
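
As one small, hypothetical example of calling these services from a partner application, here is a sketch using the google-cloud-translate Python client; the sample text is arbitrary and this is not an actual SAP integration.

    # Hedged sketch: translate a status message with the Google Translate API.
    # The sample string is arbitrary; this is not an actual SAP integration.
    from google.cloud import translate_v2 as translate

    client = translate.Client()  # assumes Application Default Credentials
    result = client.translate("Invoice approved and posted.", target_language="de")
    print(result["translatedText"])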

G Suite integrations

We’re continuing to implement our joint vision with SAP around future integrations with key SAP solutions in addition to existing integrations between G Suite and SAP solutions like SAP Anywhere, Concur and BusinessObjects Lumira.

For those attending SAP’s SAPPHIRE NOW event, stop by the Google Cloud booth, #1153, for additional details and to see demos in action.

Source: Google Cloud


Four reasons your company should use the new Team Drives

1. Team Drives makes onboarding new hires easier.

Onboarding new team members can take weeks, sometimes months, before those employees become fully productive, often because they have limited access to training materials and project information. With Team Drives, new members get instant access to the right documents, so the time it takes to ramp up is dramatically decreased and they can dive straight into work.

2. Files stay in Team Drives even if team members leave.

Determining file ownership when an employee leaves can be a major pain point for a lot of companies. Files in Team Drives belong to the team instead of an individual, so you no longer have to worry about tracking down and transferring information once an employee leaves. The files stay within Team Drives so that your team can continue to share information and workflows aren’t interrupted.

3. It’s easy to manage and share permissions for employees and admins.

If you’re a large organization, keeping track of your data is critical. You need tools that can help you manage access to ensure that only the right people are sharing information. Team Drives makes it easy for employees to manage file access: you can set permissions that control who can edit, comment on, reorganize or delete files. By default, all members of a Team Drive see the same files regardless of who adds or reorganizes them, cutting back on how many times you have to grant file access to trusted teammates.

Before employees get started using Team Drives, admins can adjust permissions in the G Suite Admin console, like enabling Team Drives for an entire domain or just for specific organizational units. Plus, admins can add or remove Team Drives members as necessary and easily edit permissions.
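
For admins or developers who want to automate this, the Drive API exposes Team Drives and their membership. The sketch below is a rough illustration only; the Team Drive name and the email address are placeholders.

    # Hedged sketch: create a Team Drive and add a member with the Drive API v3.
    # The Team Drive name and email address are placeholders.
    import uuid
    from googleapiclient.discovery import build

    drive = build("drive", "v3")  # assumes Application Default Credentials

    # Creating a Team Drive requires a unique requestId so the call is idempotent.
    team_drive = drive.teamdrives().create(
        requestId=str(uuid.uuid4()),
        body={"name": "Project Onboarding"},
    ).execute()

    # Membership is granted on the Team Drive itself and applies to all its files.
    drive.permissions().create(
        fileId=team_drive["id"],
        supportsTeamDrives=True,
        body={"type": "user", "role": "writer", "emailAddress": "new.hire@example.com"},
    ).execute()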

4. Team Drives uses machine learning to help you find files. 

There are more than 800 million monthly active users on Drive and trillions of files stored in it. Many of these files represent the collective knowledge of employees, and having “quick” access to them is a boon for productivity.

Previously, enterprise knowledge management solutions attempted to deliver the right files to employees at the right time, but this required manually tagging documents with metadata, a time-consuming process. Now, you can use Quick Access, a feature in Drive that uses powerful machine learning algorithms to analyze trending topics, team calendars and other contextual information to identify relevant documents and suggest files to users.

Use this step-by-step guide to get started on Team Drives today.

Source: Google Cloud