
Outline: secure access to the open web

Censorship and surveillance are challenges that many journalists around the world face on a daily basis. Some of them use a virtual private network (VPN) to provide safer access to the open internet, but not all VPNs are equally reliable and trustworthy, and even fewer are open source.

That’s why Jigsaw created Outline, a new open source, independently audited platform that lets any organization easily create and operate their own VPN.

Outline’s most striking feature is arguably how easy it is to use. An organization starts by downloading the Outline Manager app, which lets them sign in to DigitalOcean, where they can host their own VPN, and set it up with just a few clicks. They can also easily use other cloud providers, provided they have shell access to run the installation script. Once an Outline server is set up, the server administrator can create access credentials and share them with their network of contacts, who can then use the Outline clients to connect to it.


A core element of any VPN’s security is the protocol that the server and clients use to communicate. When we looked at the existing protocols, we realized that many of them were easily identifiable by network adversaries looking to spot and block VPN traffic. To make Outline more resilient against this threat, we chose Shadowsocks, a secure, handshake-less, open source protocol that is known for its strength and performance, and enjoys the support of many developers worldwide. Shadowsocks combines a simplified SOCKS5-like routing protocol with an encrypted transport channel. We chose the AEAD_CHACHA20_POLY1305 cipher, an IETF standard that provides the security and performance users need.
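As an illustration of how such access credentials can be packaged for sharing, here is a minimal Python sketch that builds a Shadowsocks access key in the SIP002 ss:// URL format; the host, port, password, and tag below are placeholder values, not real Outline defaults:

```python
import base64

def make_access_key(method: str, password: str, host: str, port: int, tag: str = "") -> str:
    """Build a Shadowsocks access key in the SIP002 ss:// URL format:
    the user-info part (method:password) is Base64URL-encoded, unpadded."""
    userinfo = base64.urlsafe_b64encode(f"{method}:{password}".encode()).decode().rstrip("=")
    key = f"ss://{userinfo}@{host}:{port}"
    return f"{key}#{tag}" if tag else key

# Example with placeholder values:
key = make_access_key("chacha20-ietf-poly1305", "secret", "203.0.113.5", 8388, "my-server")
print(key)
```

A client that understands the ss:// scheme can decode the user-info part to recover the cipher name and password and then connect to the given host and port.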

Another important component to security is running up-to-date software. We package the server code as a Docker image, enabling us to run on multiple platforms, and allowing for automatic updates using Watchtower. On DigitalOcean installations, we also enable automatic security updates on the host machine.

If security is one of the most critical parts of creating a better VPN, usability is the other. We wanted Outline to offer a consistent, simple user experience across platforms, and for it to be easy for developers around the world to contribute to it. With that in mind, we use the cross-platform development framework Apache Cordova for Android, iOS, macOS and ChromeOS, and Electron for Windows. The application logic is a web application written in TypeScript, while the networking code had to be written in native code for each platform. This setup allows us to reuse most of the code and create consistent user experiences across diverse platforms.

In order to encourage a robust developer community we wanted to strike a balance between simplicity, reproducibility, and automation of future contributions. To that end, we use Travis for continuous builds and to generate the binaries that are ultimately uploaded to the app stores. Thanks to its cross-platform support, any team member can produce a macOS or Windows binary with a single click. We also use Docker to package the build tools for client platforms, and thanks to Electron, developers familiar with the server's Node.js code base can also contribute to the Outline Manager application.

You can find our code in the Outline GitHub repositories and more information on the Outline website. We hope that more developers join the project to build technology that helps people connect to the open web and stay safer online.

By Vinicius Fortuna, Jigsaw

Double Stuffed Security in Android Oreo

Posted by Gian G Spicuzza, Android Security team

Android Oreo is stuffed full of security enhancements. Over the past few months, we've covered how we've improved the security of the Android platform and its applications: from making it safer to get apps, dropping insecure network protocols, providing more user control over identifiers, hardening the kernel, making Android easier to update, all the way to doubling the Android Security Rewards payouts. Now that Oreo is out the door, let's take a look at all the goodness inside.

Expanding support for hardware security

Android already supports Verified Boot, which is designed to prevent devices from booting up with software that has been tampered with. In Android Oreo, we added a reference implementation for Verified Boot running with Project Treble, called Android Verified Boot 2.0 (AVB). AVB has a couple of cool features to make updates easier and more secure, such as a common footer format and rollback protection. Rollback protection is designed to prevent a device from booting if it has been downgraded to an older OS version, which could be vulnerable to an exploit. To do this, the device saves the OS version, either in special hardware or by having the Trusted Execution Environment (TEE) sign the data. Pixel 2 and Pixel 2 XL come with this protection and we recommend all device manufacturers add this feature to their new devices.
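The rollback-protection rule itself is simple; here is a toy Python model of the forward-only version ratchet (an illustrative sketch, not the actual AVB implementation):

```python
class RollbackProtector:
    """Toy model of AVB rollback protection: the stored rollback index only
    moves forward, and any image carrying a lower index is refused at boot."""

    def __init__(self):
        # On real devices this value lives in tamper-evident storage
        # or in data signed by the TEE, so an attacker can't wind it back.
        self.stored_index = 0

    def try_boot(self, image_index: int) -> bool:
        if image_index < self.stored_index:
            return False  # downgraded (possibly vulnerable) image: refuse to boot
        self.stored_index = image_index  # ratchet the stored version forward
        return True

device = RollbackProtector()
print(device.try_boot(5))  # a newer image boots and advances the index
print(device.try_boot(3))  # a downgrade attempt is refused
```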

Oreo also includes the new OEM Lock Hardware Abstraction Layer (HAL) that gives device manufacturers more flexibility in how they protect a device's locked, unlocked, or unlockable state. For example, the new Pixel phones use this HAL to pass commands to the bootloader. The bootloader analyzes these commands the next time the device boots and determines if changes to the locks, which are securely stored in Replay Protected Memory Block (RPMB), should happen. If your device is stolen, these safeguards are designed to prevent your device from being reset and to keep your data secure. This new HAL even supports moving the lock state to dedicated hardware.

Speaking of hardware, we've invested in support for tamper-resistant hardware, such as the security module found in every Pixel 2 and Pixel 2 XL. This physical chip prevents many software and hardware attacks and is also resistant to physical penetration attacks. The security module prevents deriving the encryption key without the device's passcode and limits the rate of unlock attempts, which makes many attacks infeasible due to time restrictions.
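The rate-limiting idea can be illustrated with a small sketch; exponential backoff is an assumption here for illustration, since the security module's actual policy isn't specified in this post:

```python
class UnlockRateLimiter:
    """Sketch of rate-limited unlock attempts: each failure doubles the wait
    before the next try, so brute-forcing a passcode quickly becomes infeasible."""

    def __init__(self, base_delay_s: float = 1.0):
        self.failures = 0
        self.base_delay_s = base_delay_s

    def required_wait(self) -> float:
        """Seconds the user must wait before the next attempt is accepted."""
        return 0.0 if self.failures == 0 else self.base_delay_s * 2 ** (self.failures - 1)

    def attempt(self, passcode: str, correct: str) -> bool:
        if passcode == correct:
            self.failures = 0  # a correct passcode resets the backoff
            return True
        self.failures += 1
        return False

limiter = UnlockRateLimiter()
for guess in ["0000", "1111", "2222"]:
    limiter.attempt(guess, "9999")
print(limiter.required_wait())  # the wait grows with each failure
```

Even a modest doubling schedule means that exhausting a 4-digit passcode space takes years rather than minutes.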

While the new Pixel devices have the special security module, all new GMS devices shipping with Android Oreo are required to implement key attestation. This provides a mechanism for strongly attesting IDs such as hardware identifiers.

We added new features for enterprise-managed devices as well. In work profiles, encryption keys are now ejected from RAM when the profile is off or when your company's admin remotely locks the profile. This helps secure enterprise data at rest.

Platform hardening and process isolation

As part of Project Treble, the Android framework was re-architected to make updates easier and less costly for device manufacturers. This separation of platform and vendor-code was also designed to improve security. Following the principle of least privilege, these HALs run in their own sandbox and only have access to the drivers and permissions that are absolutely necessary.

Continuing with the media stack hardening in Android Nougat, most direct hardware access has been removed from the media frameworks in Oreo resulting in better isolation. Furthermore, we've enabled Control Flow Integrity (CFI) across all media components. Most vulnerabilities today are exploited by subverting the normal control flow of an application, instead changing them to perform arbitrary malicious activities with all the privileges of the exploited application. CFI is a robust security mechanism that disallows arbitrary changes to the original control flow graph of a compiled binary, making it significantly harder to perform such attacks.

In addition to these architecture changes and CFI, Android Oreo comes with a feast of other tasty platform security enhancements:

  • Seccomp filtering: makes some unused syscalls unavailable to apps so that they can't be exploited by potentially harmful apps.
  • Hardened usercopy: A recent survey of security bugs on Android revealed that invalid or missing bounds checking was seen in approximately 45% of kernel vulnerabilities. We've backported a bounds checking feature to Android kernels 3.18 and above, which makes exploitation harder while also helping developers spot issues and fix bugs in their code.
  • Privileged Access Never (PAN) emulation: Also backported to 3.18 kernels and above, this feature prohibits the kernel from accessing user space directly and ensures developers utilize the hardened functions to access user space.
  • Kernel Address Space Layout Randomization (KASLR): Although Android has supported userspace Address Space Layout Randomization (ASLR) for years, we've backported KASLR to help mitigate vulnerabilities on Android kernels 4.4 and newer. KASLR works by randomizing the location where kernel code is loaded on each boot, making code reuse attacks probabilistic and therefore more difficult to carry out, especially remotely.
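To make the hardened-usercopy item above concrete, here is a toy Python model of a bounds-checked copy; the real feature lives in the kernel's copy_to_user/copy_from_user paths, and this sketch only illustrates the check:

```python
def copy_to_user_checked(kernel_obj: bytes, offset: int, length: int) -> bytes:
    """Toy model of hardened usercopy: refuse any copy whose range falls
    outside the source kernel object, instead of silently over-reading."""
    if offset < 0 or length < 0 or offset + length > len(kernel_obj):
        raise ValueError("usercopy: bounds check failed")
    return kernel_obj[offset:offset + length]

secret_struct = b"\x01\x02\x03\x04\x05\x06\x07\x08"
print(copy_to_user_checked(secret_struct, 2, 4))  # in-bounds copy succeeds
```

An out-of-range request, which on an unhardened kernel might leak adjacent memory to user space, is rejected outright.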

App security and device identifier changes

Android Instant Apps run in a restricted sandbox which limits permissions and capabilities such as reading the on-device app list or transmitting cleartext traffic. Although introduced during the Android Oreo release, Instant Apps are supported on devices running Android Lollipop and later.

In order to handle untrusted content more safely, we've isolated WebView by splitting the rendering engine into a separate process and running it within an isolated sandbox that restricts its resources. WebView also supports Safe Browsing to protect against potentially dangerous sites.

Lastly, we've made significant changes to device identifiers to give users more control, including:

  • Moving the static Android ID and Widevine values to an app-specific value, which helps limit the use of device-scoped non-resettable IDs.
  • In accordance with IETF RFC 7844 anonymity profile, net.hostname is now empty and the DHCP client no longer sends a hostname.
  • For apps that require a device ID, we've built a Build.getSerial() API and protected it behind a permission.
  • Alongside security researchers¹, we designed robust MAC address randomization for Wi-Fi scan traffic in the firmware of various chipsets.
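The MAC-randomization item above can be sketched in a few lines. This is an illustrative model rather than the actual firmware implementation; randomized MACs conventionally set the "locally administered" bit and clear the "multicast" bit of the first octet:

```python
import random

def random_scan_mac(rng=random) -> str:
    """Generate a fresh random MAC address for a Wi-Fi scan: set the
    'locally administered' bit (0x02) and clear the 'multicast' bit (0x01)
    of the first octet, so the address is valid but not a vendor-assigned one."""
    octets = [rng.randrange(256) for _ in range(6)]
    octets[0] = (octets[0] | 0x02) & 0xFE
    return ":".join(f"{o:02x}" for o in octets)

print(random_scan_mac())  # a different address on every scan
```

Because the address changes per scan, networks can no longer track a device across locations by the hardware MAC it broadcasts while probing.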

Android Oreo brings in all of these improvements, and many more. As always, we appreciate feedback and welcome suggestions for how we can improve Android. Contact us at [email protected].

_____________________________________________________________________

1: Glenn Wilkinson and team at Sensepost, UK, Célestin Matte, Mathieu Cunche: University of Lyon, INSA-Lyon, CITI Lab, Inria Privatics, Mathy Vanhoef, KU Leuven

Identifying Intrusive Mobile Apps using Peer Group Analysis

Posted by Martin Pelikan, Giles Hogben, and Ulfar Erlingsson of Google's Security and Privacy team

Mobile apps entertain and assist us, make it easy to communicate with friends and family, and provide tools ranging from maps to electronic wallets. But these apps could also seek more device information than they need to do their job, such as personal data and sensor data from components, like cameras and GPS trackers.

To protect our users and help developers navigate this complex environment, Google analyzes privacy and security signals for each app in Google Play. We then compare that app to other apps with similar features, known as functional peers. Creating peer groups allows us to calibrate our estimates of users' expectations and set adequate boundaries of behaviors that may be considered unsafe or intrusive. This process helps detect apps that collect or send sensitive data without a clear need, and makes it easier for users to find apps that provide the right functionality and respect their privacy. For example, most coloring book apps don't need to know a user's precise location to function and this can be established by analyzing other coloring book apps. By contrast, mapping and navigation apps need to know a user's location, and often require GPS sensor access.

One way to create app peer groups is to create a fixed set of categories and then assign each app into one or more categories, such as tools, productivity, and games. However, fixed categories are too coarse and inflexible to capture and track the many distinctions in the rapidly changing set of mobile apps. Manual curation and maintenance of such categories is also a tedious and error-prone task.

To address this, Google developed a machine-learning algorithm for clustering mobile apps with similar capabilities. Our approach uses deep learning of vector embeddings to identify peer groups of apps with similar functionality, using app metadata, such as text descriptions, and user metrics, such as installs. Then peer groups are used to identify anomalous, potentially harmful signals related to privacy and security, from each app's requested permissions and its observed behaviors. The correlation between different peer groups and their security signals helps different teams at Google decide which apps to promote and determine which apps deserve a more careful look by our security and privacy experts. We also use the result to help app developers improve the privacy and security of their apps.
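A highly simplified sketch of the peer-group idea follows. The real system uses deep-learned embeddings and many more signals; the permission sets, the frequency threshold, and the "coloring book" peer group below are illustrative assumptions:

```python
from collections import Counter

def flag_anomalous_permissions(peer_apps, app_permissions, threshold=0.25):
    """Flag permissions that are rare within an app's functional peer group:
    if fewer than `threshold` of the peers request a permission, treat it
    as a potentially intrusive anomaly worth a closer look."""
    counts = Counter(p for peer in peer_apps for p in peer)
    n = len(peer_apps)
    return sorted(p for p in app_permissions if counts[p] / n < threshold)

# Illustrative "coloring book" peer group: none of the peers need location.
peers = [{"INTERNET"}, {"INTERNET"}, {"INTERNET", "VIBRATE"}, {"INTERNET"}]
print(flag_anomalous_permissions(peers, {"INTERNET", "ACCESS_FINE_LOCATION"}))
# → ['ACCESS_FINE_LOCATION']
```

The same app's location request would not be flagged in a mapping peer group, where nearly all peers request it — the baseline is what the peers do, not a fixed rule.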

Apps are split into groups of similar functionality, and in each cluster of similar apps the established baseline is used to find anomalous privacy and security signals.

These techniques build upon earlier ideas, such as using peer groups to analyze privacy-related signals, deep learning for language models to make those peer groups better, and automated data analysis to draw conclusions.

Many teams across Google collaborated to create this algorithm and the surrounding process. Thanks to several essential team members including Andrew Ahn, Vikas Arora, Hongji Bao, Jun Hong, Nwokedi Idika, Iulia Ion, Suman Jana, Daehwan Kim, Kenny Lim, Jiahui Liu, Sai Teja Peddinti, Sebastian Porst, Gowdy Rajappan, Aaron Rothman, Monir Sharif, Sooel Son, Michael Vrable, and Qiang Yan.

For more information on Google's efforts to detect and fight potentially harmful apps (PHAs) on Android, see Google Android Security Team's Classifications for Potentially Harmful Applications.

References

S. Jana, Ú. Erlingsson, I. Ion (2015). Apples and Oranges: Detecting Least-Privilege Violators with Peer Group Analysis. arXiv:1510.07308 [cs.CR].

T. Mikolov, I. Sutskever, K. Chen, G. S. Corrado, J. Dean (2013). Distributed Representations of Words and Phrases and their Compositionality. Advances in Neural Information Processing Systems 26 (NIPS 2013).

Ú. Erlingsson (2016). Data-driven software security: Models and methods. Proceedings of the 29th IEEE Computer Security Foundations Symposium (CSF'16), Lisboa, Portugal.

Google for Work security: Announcing Data Loss Prevention to wrap up 2015



Every company has data that it must keep secure — whether that data is about confidential innovations, strategic plans or sensitive HR issues — keeping all of your data safe from inadvertent or purposeful leaks needs to be simple, quick and reliable. Google for Work already helps admins manage information security with tools such as encryption, sharing controls, mobile device management and two-factor authentication. However, sometimes user actions compromise the best of all of these controls; for example, a user might hit “Reply all” when meaning to send a private message with sensitive content.

Starting today, if you’re a Google Apps Unlimited customer, Data Loss Prevention (DLP) for Gmail will add another layer of protection to prevent sensitive information from being revealed to those who shouldn’t have it.

How Gmail DLP works

Organizations may have a policy that the Sales department shouldn’t share customer credit card information with vendors. To keep that information safe, admins can easily set up a DLP policy by selecting “Credit Card Numbers” from a library of predefined content detectors. Gmail DLP will automatically check all outgoing emails from the Sales department and take action based on what the admin has specified: either quarantine the email for review, tell users to modify the information, or block the email from being sent and notify the sender. These checks don’t just apply to email text, but also to content inside common attachment types ― such as documents, presentations and spreadsheets. And admins can also create custom rules with keywords and regular expressions.
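To illustrate how a content detector of this kind might work, here is a hedged Python sketch that pairs a regular expression with a Luhn checksum to cut false positives; this is an illustrative example, not Google's actual detector:

```python
import re

# Candidate runs of 13-16 digits, optionally separated by spaces or hyphens.
CARD_RE = re.compile(r"\b(?:\d[ -]?){13,16}\b")

def luhn_ok(digits: str) -> bool:
    """Luhn checksum: doubles every second digit from the right and checks
    that the total is divisible by 10. Filters out most random digit runs."""
    total, parity = 0, len(digits) % 2
    for i, ch in enumerate(digits):
        d = int(ch)
        if i % 2 == parity:
            d *= 2
            if d > 9:
                d -= 9
        total += d
    return total % 10 == 0

def contains_card_number(text: str) -> bool:
    for m in CARD_RE.finditer(text):
        digits = re.sub(r"\D", "", m.group())
        if 13 <= len(digits) <= 16 and luhn_ok(digits):
            return True
    return False

print(contains_card_number("Card: 4111 1111 1111 1111"))  # Luhn-valid test number
```

A production detector would also scan extracted attachment text and support admin-defined actions (quarantine, warn, or block) on a match.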


Check out the DLP whitepaper for more information, including the full list of predefined content detectors, and learn how to get started. Gmail DLP is the first step in a long-term investment to bring rule-based security across Google Apps. We’re working on bringing DLP to Google Drive early next year, along with other rule-based security systems.

As we round out the year, let’s take a look at some of the other work we brought to our services in 2015 to enhance security and privacy and the control you have over your information.

  • To verify the good work we do on privacy, we were one of the first cloud providers to invite an independent auditor to show that our privacy practices for Google Apps for Work and Google Apps for Education comply with the ISO/IEC 27018:2014 privacy standard. These audits confirm, for example, that we don’t use customer data for advertising.
  • To make security easier for all, we've expanded our security toolset:
    • We introduced Security Keys to make two-step verification more convenient and provide better protection against phishing. For admins, we released Google Apps identity services, which allows secure single sign on access with SAML and OIDC support and we delivered device (MDM) and app (MAM) Mobile Management across Google Apps.
    • We launched Postmaster tools to help Gmail users better handle large volumes of mail and report spam.
    • For Google Cloud developers, the Cloud Security Scanner allows you to easily scan your application for common vulnerabilities (such as cross-site scripting (XSS) and mixed content).
    • For those who want the power and flexibility of public cloud computing and want to bring their own encryption keys, we announced Customer-Supplied Encryption Keys for Google Cloud Platform.
    • To give more transparency on how email security, even beyond Gmail, is changing over the years we published the Safer Email report.
  • We introduced new sharing features, alerts and audit events to Google Drive for Google Apps Unlimited customers. For example, administrators can now create custom alerts and disable the downloading, printing or copying of files with Information Rights Management (IRM). New sharing settings give employees better control within their organization unit and now admins can let them reset their own passwords.
  • Google Groups audit settings allow better tracking of Groups memberships. For all, the launch of google.com/privacy gives better control over personal data and Android for Work makes it easier to keep personal and work data separate on employee devices.

Companies are moving to the cloud for all kinds of reasons, but security and trust remain critical differentiators between providers. That’s why millions of businesses trust Google to do the daily heavy lifting in security — preventing, testing, monitoring, upgrading and patching, while working towards the future. Because Google was born in the cloud, we’ve built security from the ground up across our entire technology stack, from the data centers to the servers to the services and features we provide across all of your devices. No other cloud provider can claim this degree of security investment at every single layer.

While 2015 was a great year, there’s a lot more in store for 2016. To learn more about how our technology is evolving, please join us at the Enigma conference in San Francisco on January 25th to discuss electronic crime, security and privacy ideas that matter.

The Time for Reform is Now


As the debate over electronic communications privacy escalates in Congress and around the country, I testified this week before the Senate Judiciary Committee to discuss this very issue. The hearing provided an important opportunity to address users’ very reasonable expectations of privacy when it comes to the content in their email and other online accounts. 

Google strongly supports legislation to update the Electronic Communications Privacy Act (ECPA), which was signed into law almost thirty years ago -- long before email accounts and the Web were part of our daily lives.  As it is currently written, ECPA allows government agencies to compel a provider to disclose the content of communications, like email and photos, without a warrant in some circumstances. This pre-digital era law no longer makes sense: users expect, as they should, that the documents they store online have the same Fourth Amendment protections as they do when the government wants to enter the home to seize documents stored in a desk drawer. 

There is no compelling policy or legal rationale for there to be different rules. Indeed, the law as currently written is unconstitutional, as the Sixth Circuit Federal Court of Appeals held back in 2010 in United States v. Warshak.  Google requires that law enforcement secure a warrant to compel Google to disclose content. 

In spite of the tremendous support for the legislation voiced across the political spectrum, some agencies that investigate civil infractions, as opposed to violations of criminal law, have sought to delay fixing the infirmities, and have even asked Congress for new and expanded powers. They seek the authority to force providers to search for and disclose users’ emails, documents and other content, rather than getting the information directly from users as they currently do. Congress should reject these efforts to expand the authority of these agencies, and should remain focused on fixing this broken statute. 

It is undeniable that ECPA no longer reflects users’ reasonable expectations of privacy and no longer comports with the Constitution. The Senate legislation, the ECPA Amendments Act of 2015, and its companion in the House, the Email Privacy Act, will ensure electronic communications content is treated in a manner commensurate with other papers and effects that are protected by the Fourth Amendment. 

The time for reform is now.

Positive Momentum for the Judicial Redress Act



US privacy and security laws make distinctions between US persons and non-US persons that are becoming obsolete in a world where communications primarily take place over a global medium: the Internet.

The Privacy Act of 1974 is one of those laws. It is an important law that creates rights - including judicial redress - against privacy harms that may arise from the US government’s collection and use of personal information. The Privacy Act, however, does not apply to non-US persons.

Last November, Google endorsed legislation that would extend the Privacy Act to non-US persons. Since then, Congressmen Sensenbrenner and Conyers have introduced legislation - the Judicial Redress Act - that would create a process to extend the Privacy Act to non-US persons. Senators Hatch and Murphy have introduced a companion bill in the Senate.

The Judicial Redress Act is an important first step toward establishing a framework whereby users have comparable privacy protections regardless of their citizenship.

Earlier today, the House Judiciary Committee unanimously passed this bill, which enjoys support from a broad array of Internet companies and trade associations. We commend the Judiciary Committee’s action today, and we encourage the House leadership to move swiftly to pass this important bill.

Privacy, security, surveillance: getting it right is important



Thank you for inviting me here today. It’s a great honor to be with you this afternoon: in a state with such a long history of invention--Siemens, Audi, BMW, Adidas; and in a city that has been such a wonderful partner to Google.

Just down the road, we signed our first major books digitization project with the Bavarian State Library. The village of Oberstaufen was our first Street View launch in Germany. Minister-President Seehofer was the first German politician to do a live interview on YouTube. Even the model locomotive in your Stone Hall represents a shared love of technology and excitement about the future.

Happily, it’s a future with more investment in Munich. Our new engineering center here will be home to several hundred employees--in addition to the three hundred who already live here. It happens to be located, appropriately enough, next to the Hacker Bridge--though, we don’t plan to hire any additional security.

Now I must admit to being a little bit nervous. US tech companies are front and center of the European political debate today: not always for the right reasons. And frankly some of the criticism is fair. As an industry we have sometimes been a little too high on our own success.

With that as my starting point, I wanted to talk about three important issues facing us all today:
  • First, government surveillance and the role technology companies have in the fight against crime and terrorism;
  • Second, the growing need to keep people’s information safe and secure online; and
  • Third, privacy in the digital age.

Government surveillance

One of the most basic duties of any government is to protect its citizens. It’s always been true that technology can be used for good, and bad. Since humankind discovered fire, there’s been arson. And today, the technologies we all use to find information or chat with loved ones, are also being co-opted by the criminal minority for their own purposes.

It’s why companies like Google have a responsibility to work with law enforcement. And we do--regularly providing account details, as well as the contents of private communications, like email, to the authorities as they investigate crime and terrorism.

For example, in the first six months of 2010, Google received almost 15,000 government requests for user data. By 2014, that number had risen to just under 35,000. We look carefully at every request and provide information in the majority of these cases--over 65 percent.

Why, you may ask, didn’t we comply in every case? Well, we have a duty to our users, as well. When people sign up for an email account, they trust Google to keep that information private. So we need to be certain law enforcement requests are legitimate--not targeted at political activists or overly broad in scope. In these cases we always push back. And we never let governments just help themselves to our users’ data. No government--including the US government--has backdoor access to Google or surveillance equipment on our networks.

This is why encryption is also important--because it requires governments to go through the proper legal channels. There’s simply no other way for them to get encrypted data, save hacking into our systems or by targeting individual users--issues I’ll touch on later. In fact, Gmail was the first email service to be encrypted by default, and we now encrypt Google Search, Maps, and Drive (our cloud-based storage service).

In the last few months, a number of governments have voiced their concern about the time it takes to process requests for user data when investigating crime, encryption and the storage of data, as well as the use of the Internet by terrorists. These concerns are entirely understandable, especially after last month’s horrific attacks in Paris and the barbaric murders of hostages by ISIS. So let me address each one in turn, starting with the time taken to process requests for user data.

When it’s a threat to life situation, Google is able to provide information to the authorities within hours--this is incredibly important given the increased terrorist threat many governments face today. But in most other situations, law enforcement requests--especially for private communications, such as Gmail--must be made through diplomatic channels, typically Mutual Legal Assistance Treaties, or MLATs for short. For example, if the US Government wants user information from a company based in Germany--say GMX or Xing--it works through the German government. It’s the same when the German government wants information from a US company, like Google. This creates checks and balances, preventing potential abuse.

That said, the MLAT process is too slow, too complicated and in need of reform. It’s why we’ve pressed to increase funding for the US Department of Justice so they can hire more people to process more requests, more quickly. And there’s good news here. For the first time, they’ve dedicated 90 staff and $20 million to process MLAT requests, and President Obama’s latest budget proposal asks for more.

When it comes to reform, it would save time if we moved beyond paper, fax machines and diplomatic pouches to web forms that are quick and easy to process. Europe is leading the way here. We now need the US to follow suit.

However, even with reform, some intergovernmental oversight will always be necessary. If government X wants information on its own citizens, that’s one thing. But when it’s asking for information about country Y’s citizens, surely that country should have a say in the decision as well. This process will always take some time.

Next: government concerns about encryption and the storage of data. Encryption helps prevent hackers from getting access to sensitive information like bank details--keeping the web safe and secure for everyone. It’s the same with the deletion of data. Snapchat, for example, automatically deletes photos and videos. It’s the ultimate right to be forgotten for the millions of young people using the service every day. Given most people use the Internet for the reasons it was intended, we shouldn’t weaken security and privacy protections for the majority to deal with the minority who don’t.

Finally, terrorism. All of us have been horrified by ISIS and their use of the media to spread propaganda. At YouTube, the world’s most popular video sharing platform, we’re acutely aware of our responsibilities.

  • Last year alone we removed 14 million videos because they broke YouTube’s policies prohibiting gratuitous violence, incitement to violence and hate speech.
  • We automatically terminate the accounts of any terror group, and hand over the account information to the authorities.
  • We allow law enforcement, for example the UK Home Office, to flag videos containing terrorist content, which we review and remove as a priority. We hope to work with law enforcement in other countries on similar efforts.
  • And, we work with dozens of non-governmental organizations on counter speech--helping provide an alternative viewpoint to vulnerable young people.

Of course there is always more to be done and we welcome your ideas.

Over the last three years, first with Edward Snowden and now ISIS, we’ve seen the political debate about government access to information swing from one end of the spectrum to the other. Indeed, the race to encrypt was driven in large part by Snowden’s revelations, which uncovered some pretty outrageous behavior on the part of the US Government. The emergence of ISIS is now leading some governments to question encryption entirely, as well as to call for increased data retention. The solution, we believe, lies in a principled yet practical approach: one that restricts indiscriminate surveillance and supports valid law enforcement efforts while also protecting people’s privacy.

Privacy and security of personal information

Which brings me to my next subject: keeping people’s information safe and secure. In many ways, privacy and security are two sides of the same coin--if your data is not secure it’s not private, as last year’s celebrity hacks showed. While the target that time was Hollywood, it could just as easily have been you or me. So it’s not surprising that a recent Gallup poll showed people are more concerned with theft online than having their house broken into.

In the last four years, we’ve been able to cut in half the number of Google accounts that are hijacked. For example, we block suspicious attempts to log into accounts--perhaps because they come from an unusual device or location. If you’ve ever traveled abroad and got an email questioning a recent login, that’s Google working to keep you safe. And we also offer two-factor authentication, so people no longer rely on their passwords alone: they confirm their identity with both a password and a code generated by their phone. If you’re at this conference and you’re not using two-factor authentication, you really should be--please talk to Wieland afterwards!

Now, we’re under a lot of scrutiny in Europe because of our size. But it is precisely our size that enables us to invest a lot in security, which helps our users as well as the wider web. For example, our Safe Browsing technology identifies sites that steal passwords or contain malware. If you’re using Chrome, we show very visible warnings--20 million per week--when you try to visit a malicious webpage. And because we make this data publicly available, Apple’s Safari and Mozilla’s Firefox browsers can use it as well. This helps protect over one billion people all around the world. We can also help move things forward in other ways: for instance, we now rank encrypted websites slightly higher in our search results, encouraging everyone to encrypt their services. And any company can take advantage of Google’s security expertise by using our corporate versions of Gmail and Drive. The fact that we employ 500 security and privacy experts means they don’t have to.
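For readers curious how a browser can consult a blocklist of that size without sending every URL to Google: Safe Browsing clients, in broad strokes, keep a local set of short hash prefixes of known-bad URLs, and only contact the server when a prefix matches. The following is a simplified illustration of that local check, with a made-up blocklist (the real protocol involves URL canonicalization and server-side full-hash confirmation):

```python
import hashlib

# Hypothetical local database of 4-byte SHA-256 prefixes of known-bad URLs.
# Real clients periodically download such prefix lists from the server.
BAD_PREFIXES = {hashlib.sha256(b"http://malware.example.test/").digest()[:4]}

def locally_suspicious(url: str) -> bool:
    """Return True if the URL's hash prefix matches the local blocklist.

    A match is only a candidate: a real client would then ask the server
    for the full hashes behind that prefix before showing a warning, so
    the server never learns which safe URLs the user visits.
    """
    prefix = hashlib.sha256(url.encode()).digest()[:4]
    return prefix in BAD_PREFIXES
```

This prefix design is also why other browsers can reuse the published data: the list itself reveals nothing about users’ browsing.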

Corporate attacks are on the increase--and they highlight the interconnected nature of the web. The Sony hack, for example, not only exposed their own employees, but also the business plans of a high-profile tech CEO. In fact, the hack affected more than just egos--it hit the studio’s bottom line, too, when cinemas decided not to show The Interview. (Luckily, we were able to stand up for creative expression while helping Sony recoup some of that lost revenue by releasing the movie on YouTube and Google Play.)

These kinds of complexities are why security should be a team effort--companies working together, and governments working with companies. In 2010, Google disclosed that we had been subject to a significant cyberattack from China. At the time we were surprised that so few of the other companies targeted were willing to talk publicly. They were understandably afraid that doing so would frighten customers, provoke lawsuits, or worry investors. This is still the case for many companies today.

When individual companies keep attacks under wraps, it can make it harder for others to improve their defenses. That’s why we should all be willing to share best practices and the threats we see. We also believe that governments could be more forthcoming about the cybersecurity intelligence they have, so everyone can better protect themselves. This information often seeps out slowly, not least because it tends to get over-classified. We’re all stronger when security is a shared responsibility.

Privacy and trust

Finally, let me turn to privacy. I want to start by making clear Google hasn’t always got this right. It’s not just about the errors we have made--with products like Buzz or the mistaken collection of WiFi data--but about our attitude too. These have been lessons learned the hard way. But as our swift implementation of the Right to be Forgotten has shown, they are indeed lessons we have learned.

Now privacy means different things for different people, in different situations. For example, I may share photos only with my loved ones--others may feel comfortable posting them on the web. I may be happy for my friends to keep my shared photos forever--others may want them to disappear soon after. In the end, privacy is closely tied to our sense of personal identity: it’s not “one size fits all”. That’s why people want to be in control of the information they share and have real choices about the services they use. And that’s what we focus on at Google.

Keeping a record of what people search for can improve the quality of their results over time. But if you want to search without your queries being stored, turn off Search History. It’s really easy. Cookies help Google remember people’s preferences, like the language they use, for example. But if you want to browse the web and have your cookies disappear, use Chrome’s Incognito mode. If Google has someone’s location, we can give directions without them having to type in their start point each time. That’s useful for people like me with fat fingers on a mobile phone. But you can always turn that off too.

In addition, you can see all the information stored by Google and access all your privacy settings from one place, your Dashboard--which by the way was developed right here in Munich by our German engineers. People are using these tools and understand the choices they make. Ten million people check out their Account History settings each week--and make over 2.5 million changes. These are split evenly between people turning settings off and turning them on.

We also take pride in letting people leave Google easily. Data portability matters. So we’ve built a Takeout tool that lets you take the data stored by Google and put it elsewhere. We want people using our services because they love them, not because we hold their data hostage.

Now some of you are doubtless thinking: wait a minute--Google still collects all that information to serve me ads. Well actually no. Most of the data we collect is used to provide and improve our services. For example we store hundreds of billions of emails because hundreds of millions of people globally want unlimited storage. Gmail has become their digital filing cabinet. In fact, our Google search ads--the core of our business--actually require very little personal information. If you type flowers into Google search--the chances are you want … well … flowers! It doesn’t take a rocket scientist or a ton of data to work that one out.

Of course it is true that most of our services today are supported by advertising. But we view that as a positive because ads enable us to offer our products for free to everyone. Without ads, the poorest would not have access to the same search results, the same maps, the same translation tools, the same email service as the richest people on earth. And it’s important to remember that even though we are in the advertising business, Google does not sell your information--nor do we share it without your permission except in very limited circumstances, like government requests for data.

Now some people argue that Google’s collection of data is no different than government surveillance. “Google has the data so why shouldn’t we” is an argument used by many intelligence services in the press. But we believe there is a significant difference. Government surveillance uses data that was collected for an entirely separate purpose; it’s conducted in secret; its targets are unaware their data is being collected, and they are unable to stop or control it. Google, by contrast, collects data to provide and improve our products. And we give our users the ability to control or stop the collection of their data, or leave entirely.

The potential of science and technology

I was reading about the history of this building. I was amazed to see how long the project took: King Maximilian first started construction in 1857. It wasn’t completed until 1874, 17 years later. They actually had to change the style of architecture, mid-build, to keep up with the times.

In those 17 years, though, we saw the invention of the gasoline engine, the sewing machine, dynamite, and the typewriter. Darwin wrote the Origin of Species, and Mendeleev created the periodic table. That’s a pretty good 17 years. Technology was moving fast--probably faster than people wanted it to.

Similarly, just 17 years ago, you couldn’t instantly share photos of your children with friends… or talk to anyone, wherever they are in the world. The idea of not having a landline telephone seemed absurd.

The point is, just as in the 1850s, technology is moving fast. It’s changing the way we live. It’s raising new questions all the time. And, just as in the past, it’ll take many of us coming together to come up with the right answers. We look forward to working with all of you on that. Because this building was constructed from a profound optimism about the potential for science and technology to improve lives. That optimism is in your history. It’s in your DNA. And it’s an optimism that Google shares with you.

Danke.