Tag Archives: hardware

New ways to experience Made by Google products

Today, we told you about what’s coming in our latest family of #MadebyGoogle products. But what's a line-up of shiny new products without plenty of ways for the world to experience, try and buy them? As we continue to build products for everyone, we’re exploring helpful new ways to get them into more people's hands.

The Google Hardware Store pop-ups

Starting on October 18, New Yorkers and Chicagoans can try out and buy our new products at a pop-up shop in each city—the only place you can shop Google products in a fully Google-made experiential space. Our pop-ups will be open October 18 through December 31, so if you’re in Chicago (Bucktown at 1704 N. Damen) or NYC (SoHo at 131 Greene Street), come visit us.

The Google Store and Enjoy

You can now pre-order and shop all of our products via the online Google Store, including the Pixel 3 and Pixel 3 XL (which work with all major carriers). And as of October 18, folks in the Bay Area can buy the new Pixel 3 or Pixel 3 XL and have it delivered in as little as three hours and expertly set up via the Enjoy service. You can also get the Pixel 2 XL, Pixelbook and Google Home Max via Enjoy delivery now. We’re bringing the Google Store to you!


Google Store + Enjoy bring the expertise of the Google Store to you.

b8ta

Made by Google products are part of an interactive shopping experience in five b8ta stores across the country (Austin, Corte Madera, Houston, San Francisco and Tysons Corner), and will be available in two new b8ta stores in Short Hills, NJ and Scottsdale, AZ opening later this year. As a part of the unique in-store experience, customers can test out and shop Google’s Home products in interactive home-like vignettes. Visit a store and demo products with one of b8ta’s experts.

Made by Google products are now part of a new interactive shopping experience in b8ta.

goop

goop is joining forces with Made by Google products to offer the Google Home smart speaker family across the U.S. in permanent goop Lab stores and goop GIFT pop-ups this holiday season. Abroad, customers can shop at the goop London pop-up which opened this past September. Keep an eye out for more information from goop + Made by Google later this month.

And as for the future...

Google Home Mini and Wing

It’s a bird, it’s a plane. It’s Google Home Mini being delivered by drone! You read that right—along with Wing (an Alphabet company), we’re pushing the boundaries of conventional delivery. As a part of a small, localized test, Google Home Minis were recently dropped off at customers’ homes only 10 minutes after ordering. Although not a reality today, imagine the possibilities in years to come… 

Google hardware. Designed to work better together.

This year marks Google’s 20th anniversary—for two decades we’ve been working toward our mission to organize the world’s information and make it universally accessible and useful for everybody. Delivering information has always been in our DNA. It’s why we exist. From searching the world, to translating it, to getting a great photo of it, when we see an opportunity to help people, we’ll go the extra mile. We love working on really hard problems that make life easier for people, in big and small ways.

There’s a clear line from the technology we were working on 20 years ago to the technology we’re developing today—and the big breakthroughs come at the intersection of AI, software and hardware, working together. This approach is what makes the Google hardware experience so unique, and it unlocks all kinds of helpful benefits. When we think about artificial intelligence in the context of consumer hardware, it isn’t artificial at all—it’s helping you get real things done, every day. A shorter route to work. A gorgeous vacation photo. A faster email response. 

So today, we’re introducing our third-generation family of consumer hardware products, all made by Google:

  • For life on the go, we’re introducing the Pixel 3 and Pixel 3 XL—designed from the inside out to be the smartest, most helpful device in your life. It’s a phone that can answer itself, a camera that won’t miss a shot, and a helpful Assistant even while it’s charging.

  • For life at work and at play, we’re bringing the power and productivity of a desktop to a gorgeous tablet called Pixel Slate. This Chrome OS device is both a powerful workstation at the office, and a home theater you can hold in your hands.

  • And for life at home we designed Google Home Hub, which lets you hear and see the info you need, and manage your connected home from a single screen. With its radically helpful smart display, Google Home Hub lays the foundation for a truly thoughtful home.

Please visit our updated online store to see the full details, pricing and availability.

The new Google devices fit perfectly with the rest of our family of products, including Nest, which joined the Google hardware family at the beginning of this year. Together with Nest, we’re pursuing our shared vision of a thoughtful home that isn’t just smart, it’s also helpful and simple enough for everyone to set up and use. It's technology designed for the way you live.

Ivy Ross + Hardware Design

Our goal with these new products, as always, is to create something that serves a purpose in people’s lives—products that are so useful they make people wonder how they ever lived without them. The simple yet beautiful design of these new devices continues to bring the smarts of the technology to the forefront, while providing people with a bold piece of hardware.

Our guiding principle

Google's guiding principle is the same as it’s been for 20 years—to respect our users and put them first. We feel a deep responsibility to provide you with a helpful, personal Google experience, and that guides the work we do in three very specific ways:

  • First, we want to provide you with an experience that is unique to you. Just like Google is organizing the world’s information, the combination of AI, software and hardware can organize your information—and help out with the things you want to get done. The Google Assistant is the best expression of this, and it’s always available when, where, and however you need it.

  • Second, we’re committed to the security of our users. We need to offer simple, powerful ways to safeguard your devices. We’ve integrated Titan™ Security, the system we built for Google, into our new mobile devices. Titan™ Security protects your most sensitive on-device data by securing your lock screen and strengthening disk encryption.

  • Third, we want to make sure you’re in control of your digital wellbeing. Our research shows that 72 percent of our users are concerned about the amount of time people spend using tech. We take this very seriously and have developed new tools that make people’s lives easier and cut back on distractions.

A few new things made by Google

With these Made by Google devices, our goal is to provide radically helpful solutions. While it’s early in the journey, we’re taking an end-to-end approach to consumer technology that merges our most innovative AI with intuitive software and powerful hardware. Ultimately, we want to help you do more with your days while doing less with your tech—so you can focus on what matters most.

Announcing Cirq: An Open Source Framework for NISQ Algorithms



Over the past few years, quantum computing has experienced growth not only in the construction of quantum hardware, but also in the development of quantum algorithms. With the availability of Noisy Intermediate Scale Quantum (NISQ) computers (devices with roughly 50 to 100 qubits and high-fidelity quantum gates), the development of algorithms to understand the power of these machines is of increasing importance. However, a common problem when designing a quantum algorithm on a NISQ processor is how to take full advantage of these limited quantum devices—using resources to solve the hardest part of the problem rather than spending them on overheads from poor mappings between the algorithm and hardware. Furthermore, some quantum processors have complex geometric constraints and other nuances, and ignoring these will either result in faulty quantum computation, or a computation that is modified and sub-optimal.*

Today at the First International Workshop on Quantum Software and Quantum Machine Learning (QSML), the Google AI Quantum team announced the public alpha of Cirq, an open source framework for NISQ computers. Cirq is focused on near-term questions and helping researchers understand whether NISQ quantum computers are capable of solving computational problems of practical importance. Cirq is licensed under Apache 2, and is free to be modified or embedded in any commercial or open source package.
Once installed, Cirq enables researchers to write quantum algorithms for specific quantum processors. Cirq gives users fine-tuned control over quantum circuits, specifying gate behavior using native gates, placing these gates appropriately on the device, and scheduling the timing of these gates within the constraints of the quantum hardware. Data structures are optimized for writing and compiling these quantum circuits to allow users to get the most out of NISQ architectures. Cirq supports running these algorithms locally on a simulator, and is designed to easily integrate with future quantum hardware or larger simulators via the cloud.
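As a concrete illustration, here is a minimal sketch of writing and simulating a small circuit with Cirq. The qubit layout and gate choices are purely illustrative, and the snippet uses the current public Cirq API, which may differ in detail from the alpha described here:

```python
import cirq

# Qubits laid out to match a (hypothetical) 2x2 patch of a grid-based device.
qubits = [cirq.GridQubit(row, col) for row in range(2) for col in range(2)]

# A short circuit built from simple gates: sqrt-of-X rotations on every qubit,
# two CZ entanglers, and a measurement of all qubits under one key.
circuit = cirq.Circuit(
    [cirq.X(q) ** 0.5 for q in qubits],
    cirq.CZ(qubits[0], qubits[1]),
    cirq.CZ(qubits[2], qubits[3]),
    cirq.measure(*qubits, key='m'),
)
print(circuit)

# Run the circuit on the local simulator; the same Circuit object is what would
# eventually be targeted at real hardware or a larger cloud-hosted simulator.
result = cirq.Simulator().run(circuit, repetitions=1000)
print(result.histogram(key='m'))
```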
We are also announcing the release of OpenFermion-Cirq, an example of a Cirq based application enabling near-term algorithms. OpenFermion is a platform for developing quantum algorithms for chemistry problems, and OpenFermion-Cirq is an open source library which compiles quantum simulation algorithms to Cirq. The new library uses the latest advances in building low depth quantum algorithms for quantum chemistry problems to enable users to go from the details of a chemical problem to highly optimized quantum circuits customized to run on particular hardware. For example, this library can be used to easily build quantum variational algorithms for simulating properties of molecules and complex materials.

Quantum computing will require strong cross-industry and academic collaborations if it is going to realize its full potential. In building Cirq, we worked with early testers to gain feedback and insight into algorithm design for NISQ computers. To learn more about how Cirq is helping enable NISQ algorithms, look at the work from these early adopters, many of whom have provided example source code for their implementations.

Today, the Google AI Quantum team is using Cirq to create circuits that run on Google’s Bristlecone processor. In the future, we plan to make this processor available in the cloud, and Cirq will be the interface through which users write programs for this processor. In the meantime, we hope Cirq will improve the productivity of NISQ algorithm developers and researchers everywhere. Please check out the GitHub repositories for Cirq and OpenFermion-Cirq — pull requests welcome!

Acknowledgements
We would like to thank Craig Gidney for leading the development of Cirq, Ryan Babbush and Kevin Sung for building OpenFermion-Cirq and a whole host of code contributors to both frameworks.


* An analogous situation is how early classical programmers needed to run complex programs in very small memory spaces by paying careful attention to the lowest level details of the hardware.

Source: Google AI Blog


Google Wifi’s Network Check now tests multiple device connections

Wi-Fi is a necessity for tons of connected devices in our homes. And when it isn’t working the way you expect, it can be a bit of a black box to troubleshoot. Google Wifi’s Network Check technology has always let you measure the speed of your internet connection and the quality of the network connection between your Google Wifi access points (if you have more than one). But what about that new smart TV in the bedroom that’s constantly buffering? Or your outdoor security camera with a flaky connection?


Starting today, we’re rolling out a new feature to Google Wifi that lets you measure how each individual device is performing on your Wi-Fi network. Knowing that Wi-Fi coverage is poor in one area of your home can help you pinpoint the exact bottleneck when you notice a connectivity slowdown. Then, you’ll know to move your Google Wifi point closer to that device or even move the device itself for a stronger connection.

Network Check update

In the past month alone, we saw an average of 18 connected devices on each Google Wifi network, globally. With so many devices on your network, we want to make sure you have a way to know each device has the best connection possible, and that your home Wi-Fi is doing its job.


This update to our Network Check technology will be available in the coming weeks to all Google Wifi users around the world—just open the Google Wifi app to get started. Dead zones be gone!

Three, two, one: New ways to control Google Pixel Buds

Google Pixel Buds let you do a lot with just a quick touch. When you use Pixel Buds with your Pixel or other Android device with the Assistant, simply touch and hold the right earbud to ask for your favorite playlist, make a call, send a message or get walking directions to dinner. And, it allows you to control your audio too—just swipe forward or backward to control volume and tap to play or pause your music.


We’re adding three highly-requested features with the latest update that is beginning to roll out today. It’s as easy as 3, 2, 1.


Triple tap: On and off with touch. Pixel Buds can now be manually turned on or off by triple-tapping on the right earbud.


Double tap: Next track. Until now, double-tapping let you hear notifications as they arrived on your phone. Now you can set double-tap to skip to the next track instead. To enable this, go to the Pixel Buds’ settings within the Google Assistant app on your phone. You can continue to use a Google Assistant voice command to skip tracks, even if you assign double-tap to the next-track feature.


One easy switch: Pairing devices made easy. To switch your Pixel Buds connection between your phone and computer (or any device you’ve previously paired), select your Pixel Buds from the Bluetooth™ menu of the desired device. Your Pixel Buds will disconnect from the device you were using and connect to the new one.


These updates are starting to roll out today and will be available to everyone by early next week. Go to g.co/pixelbuds to learn more.

Automatic Photography with Google Clips



To me, photography is the simultaneous recognition, in a fraction of a second, of the significance of an event as well as of a precise organization of forms which give that event its proper expression.
Henri Cartier-Bresson

The last few years have witnessed a Cambrian-like explosion in AI, with deep learning methods enabling computer vision algorithms to recognize many of the elements of a good photograph: people, smiles, pets, sunsets, famous landmarks and more. But, despite these recent advancements, automatic photography remains a very challenging problem. Can a camera capture a great moment automatically?

Recently, we released Google Clips, a new, hands-free camera that automatically captures interesting moments in your life. We designed Google Clips around three important principles:
  • We wanted all computations to be performed on-device. In addition to extending battery life and reducing latency, on-device processing means that none of your clips leave the device unless you decide to save or share them, which is a key privacy control.
  • We wanted the device to capture short videos, rather than single photographs. Moments with motion can be more poignant and true-to-memory, and it is often easier to shoot a video around a compelling moment than it is to capture a perfect, single instant in time.
  • We wanted to focus on capturing candid moments of people and pets, rather than the more abstract and subjective problem of capturing artistic images. That is, we did not attempt to teach Clips to think about composition, color balance, light, etc.; instead, Clips focuses on selecting ranges of time containing people and animals doing interesting activities.
Learning to Recognize Great Moments
How could we train an algorithm to recognize interesting moments? As with most machine learning problems, we started with a dataset. We created a dataset of thousands of videos in diverse scenarios where we imagined Clips being used. We also made sure our dataset represented a wide range of ethnicities, genders, and ages. We then hired expert photographers and video editors to pore over this footage to select the best short video segments. These early curations gave us examples for our algorithms to emulate. However, it is challenging to train an algorithm solely from the subjective selection of the curators — one needs a smooth gradient of labels to teach an algorithm to recognize the quality of content, ranging from "perfect" to "terrible."

To address this problem, we took a second data-collection approach, with the goal of creating a continuous quality score across the length of a video. We split each video into short segments (similar to the content Clips captures), randomly selected pairs of segments, and asked human raters to select the one they prefer.
We took this pairwise comparison approach, instead of having raters score videos directly, because it is much easier to choose the better of a pair than it is to specify a number. We found that raters were very consistent in pairwise comparisons, and less so when scoring directly. Given enough pairwise comparisons for any given video, we were able to compute a continuous quality score over the entire length. In this process, we collected over 50,000,000 pairwise comparisons on clips sampled from over 1,000 videos. That’s a lot of human effort!
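The post does not spell out how the pairwise preferences were converted into a continuous score, but one standard approach is a Bradley-Terry-style fit. The sketch below, in plain NumPy with toy data, is illustrative only and is not the pipeline described here:

```python
import numpy as np

def fit_segment_scores(pairs, num_segments, lr=0.5, steps=3000):
    """Fit one latent quality score per video segment from pairwise preferences.

    `pairs` is a list of (winner, loser) segment indices. We maximize the
    Bradley-Terry log-likelihood, the sum over pairs of log(sigmoid(s_win - s_lose)),
    by simple gradient ascent. Illustrative only; not the actual Clips pipeline.
    """
    scores = np.zeros(num_segments)
    winners = np.array([w for w, _ in pairs])
    losers = np.array([l for _, l in pairs])
    for _ in range(steps):
        # d/d(diff) of log(sigmoid(diff)) is sigmoid(-diff)
        g = 1.0 / (1.0 + np.exp(scores[winners] - scores[losers]))
        grad = np.zeros(num_segments)
        np.add.at(grad, winners, g)       # winning comparisons push a score up
        np.add.at(grad, losers, -g)       # losing comparisons push it down
        scores += lr * grad / len(pairs)
    return scores - scores.mean()         # center; only relative scores matter

# Toy run: segment 2 is always preferred over 1, and 1 over 0,
# so the fitted scores come out in ascending order.
print(fit_segment_scores([(2, 1), (1, 0), (2, 0)], num_segments=3))
```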
Training a Clips Quality Model
Given this quality score training data, our next step was to train a neural network model to estimate the quality of any photograph captured by the device. We started with the basic assumption that knowing what’s in the photograph (e.g., people, dogs, trees, etc.) will help determine “interestingness”. If this assumption is correct, we could learn a function that uses the recognized content of the photograph to predict the quality score derived above from the human comparisons.

To identify content labels in our training data, we leveraged the same Google machine learning technology that powers Google image search and Google Photos, which can recognize over 27,000 different labels describing objects, concepts, and actions. We certainly didn’t need all these labels, nor could we compute them all on device, so our expert photographers selected the few hundred labels they felt were most relevant to predicting the “interestingness” of a photograph. We also added the labels most highly correlated with the rater-derived quality scores.

Once we had this subset of labels, we then needed to design a compact, efficient model that could predict them for any given image, on-device, within strict power and thermal limits. This presented a challenge, as the deep learning techniques behind computer vision typically require strong desktop GPUs, and algorithms adapted to run on mobile devices lag far behind state-of-the-art techniques on desktop or cloud. To train this on-device model, we first took a large set of photographs and again used Google’s powerful, server-based recognition models to predict label confidence for each of the “interesting” labels described above. We then trained a MobileNet Image Content Model (ICM) to mimic the predictions of the server-based model. This compact model is capable of recognizing the most interesting elements of photographs, while ignoring non-relevant content.
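Training the compact on-device model to mimic a larger server-side model is a form of knowledge distillation. As a rough sketch of what one such training step could look like (a stock MobileNet from torchvision stands in for the ICM, random tensors stand in for images, and a random function stands in for the server model's label confidences; none of this reflects the actual pipeline):

```python
import torch
import torch.nn as nn
import torchvision

NUM_LABELS = 300            # "a few hundred" curated labels; the exact count is not public
BATCH, HEIGHT, WIDTH = 8, 224, 224

# Student: a compact MobileNet with one output per label (multi-label prediction).
student = torchvision.models.mobilenet_v2(num_classes=NUM_LABELS)
optimizer = torch.optim.Adam(student.parameters(), lr=1e-4)
loss_fn = nn.BCEWithLogitsLoss()    # accepts soft targets in [0, 1]

def teacher_confidences(images):
    # Placeholder for the large server-side recognition model, which would
    # return a confidence in [0, 1] for each curated label.
    return torch.rand(images.shape[0], NUM_LABELS)

# One distillation step: the student learns to match the teacher's soft labels.
images = torch.rand(BATCH, 3, HEIGHT, WIDTH)
targets = teacher_confidences(images)
loss = loss_fn(student(images), targets)
optimizer.zero_grad()
loss.backward()
optimizer.step()
```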

The final step was to predict a single quality score for an input photograph from its content predicted by the ICM, using the 50M pairwise comparisons as training data. This score is computed with a piecewise linear regression model that combines the output of the ICM into a frame quality score. This frame quality score is averaged across the video segment to form a moment score. Given a pairwise comparison, our model should compute a moment score that is higher for the video segment preferred by humans. The model is trained so that its predictions match the human pairwise comparisons as well as possible.
Diagram of the training process for generating frame quality scores. Piecewise linear regression maps from an ICM embedding to a score which, when averaged across a video segment, yields a moment score. The moment score of the preferred segment should be higher.
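As a minimal sketch of that training setup (illustrative only: a single linear layer stands in for the piecewise linear regression, and the feature dimensions and data are made up):

```python
import torch
import torch.nn as nn

# Hypothetical dimensions: each frame is described by 32 ICM label confidences,
# and each segment contains 15 frames.
NUM_LABELS, FRAMES_PER_SEGMENT = 32, 15

# A plain linear layer stands in for the piecewise linear regression in the post.
frame_scorer = nn.Linear(NUM_LABELS, 1)
optimizer = torch.optim.Adam(frame_scorer.parameters(), lr=1e-3)

def moment_score(segment):
    # segment: (frames, labels) -> per-frame quality scores, averaged over the segment.
    return frame_scorer(segment).mean()

# Fake training pairs of (preferred_segment, other_segment) feature tensors.
pairs = [(torch.rand(FRAMES_PER_SEGMENT, NUM_LABELS),
          torch.rand(FRAMES_PER_SEGMENT, NUM_LABELS)) for _ in range(256)]

for preferred, other in pairs:
    # Pairwise logistic loss: push the preferred segment's moment score higher.
    margin = moment_score(preferred) - moment_score(other)
    loss = torch.nn.functional.softplus(-margin)   # equals -log(sigmoid(margin))
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```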
This process allowed us to train a model that combines the power of Google image recognition technology with the wisdom of human raters, represented by 50 million opinions on what makes interesting content!

While this data-driven score does a great job of identifying interesting (and non-interesting) moments, we also added some bonuses to our overall quality score for phenomena that we know we want Clips to capture, including faces (especially recurring and thus “familiar” ones), smiles, and pets. In our most recent release, we added bonuses for certain activities that customers particularly want to capture, such as hugs, kisses, jumping, and dancing. Recognizing these activities required extensions to the ICM model.

Shot Control
Given this powerful model for predicting the “interestingness” of a scene, the Clips camera can decide which moments to capture in real-time. Its shot control algorithms follow three main principles:
  1. Respect Power & Thermals: We want the Clips battery to last roughly three hours, and we don’t want the device to overheat, so it can’t run at full throttle all the time. Clips spends much of its time in a low-power mode that captures one frame per second. If the quality of that frame exceeds a threshold set by how much Clips has recently shot, it moves into a high-power mode, capturing at 15 fps. Clips then saves a clip at the first quality peak encountered (a toy sketch of this logic appears after this list).
  2. Avoid Redundancy: We don’t want Clips to capture all of its moments at once, and ignore the rest of a session. Our algorithms therefore cluster moments into visually similar groups, and limit the number of clips in each cluster.
  3. The Benefit of Hindsight: It’s much easier to determine which clips are the best when you can examine the totality of clips captured. Clips therefore captures more moments than it intends to show to the user. When clips are ready to be transferred to the phone, the Clips device takes a second look at what it has shot, and only transfers the best and least redundant content.
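Here is the toy sketch of the power-state logic from point 1 above. The thresholds, frame rates and peak test are placeholders for illustration; the real on-device logic is not public:

```python
def shot_control(frames, quality, base_threshold=0.6):
    """Toy state machine for deciding when to capture a clip.

    `frames` is an iterable of frames and `quality(frame)` returns the
    model's moment score for that frame. Illustrative placeholders only.
    """
    mode = "low"            # low-power mode: roughly 1 frame per second
    recent_captures = 0     # the more we have shot recently, the higher the bar
    prev_score = None
    for frame in frames:
        score = quality(frame)
        threshold = base_threshold + 0.05 * recent_captures
        if mode == "low" and score > threshold:
            mode = "high"   # promising moment: switch to ~15 fps capture
        elif mode == "high" and prev_score is not None and score < prev_score:
            yield frame     # first quality peak just passed: save a clip here
            recent_captures += 1
            mode = "low"    # drop back to low power
        prev_score = score

# Toy run: quality rises then falls, so one clip is captured near the peak.
scores = iter([0.2, 0.7, 0.9, 0.8, 0.3])
print(list(shot_control(range(5), lambda frame: next(scores))))
```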
Machine Learning Fairness
In addition to making sure our video dataset represented a diverse population, we also constructed several other tests to assess the fairness of our algorithms. We created controlled datasets by sampling subjects from different genders and skin tones in a balanced manner, while keeping variables like content type, duration, and environmental conditions constant. We then used this dataset to test that our algorithms had similar performance when applied to different groups. To help detect any regressions in fairness that might occur as we improved our moment quality models, we added fairness tests to our automated system. Any change to our software was run across this battery of tests, and was required to pass. It is important to note that this methodology can’t guarantee fairness, as we can’t test for every possible scenario and outcome. However, we believe that these steps are an important part of our long-term work to achieve fairness in ML algorithms.

Conclusion
Most machine learning algorithms are designed to estimate objective qualities – a photo contains a cat, or it doesn’t. In our case, we aim to capture a more elusive and subjective quality – whether a personal photograph is interesting, or not. We therefore combine the objective, semantic content of photographs with subjective human preferences to build the AI behind Google Clips. Also, Clips is designed to work alongside a person, rather than autonomously; to get good results, a person still needs to be conscious of framing, and make sure the camera is pointed at interesting content. We’re happy with how well Google Clips performs, and are excited to continue to improve our algorithms to capture that “perfect” moment!

Acknowledgements
The algorithms described here were conceived and implemented by a large group of Google engineers, research scientists, and others. Figures were made by Lior Shapira. Thanks to Lior and Juston Payne for video content.

Source: Google AI Blog


Jump for joy: Google Clips captures life’s little moments

Two months ago, we launched Google Clips, a lightweight, hands-free camera that captures life’s beautiful and spontaneous moments with the help of machine learning and motion detection. Since then we’ve seen some great clips from moms, dads, and pet owners who have captured all kinds of candid moments.


When it comes to kids and pets, you never know which moments you’ll want to capture. It’s not just about them smiling, looking at the camera, or posing on request (near-impossible with kids and pets who don’t want to sit still!). You may want to get your daughter jumping up and down in excitement, or your son kissing your cat. It’s all about the little moments and emotions that you can't stage or coordinate ahead of time.


To help capture these moments, we’re adding improved functionality to Clips so that it’s better at recognizing hugs, kisses, jumps and dance moves. All you need to do is find the best vantage point as you go about your day, and turn Clips on.


We’ve also heard from families using Clips that they want to be able to connect the device with more than one phone. So we’re adding family pairing this month, which lets more than one family member connect their phone to the Clips device to view and share content.


Clips’s improved intelligence can help you capture more of the candid and fleeting moments that happen in between those posed frames we are all so familiar with.

If you want to learn more about how Clips knows what makes a moment worth capturing, you can check out all the details on the Research blog.


Look for our May update this week (just in time for Mother’s Day!) on your Clips app and try out the improved functionality. For those of you who are looking to try it out, you can get $50 off in our Mother’s Day promotion.

Google Home and Google Home Mini launch in India

Bringing together the best of Google’s AI, software and hardware, now with a desi twist


Whether you’re getting the kids ready for school, doing a batch of laundry, or answering the doorbell for the morning vendors, Indian homes are busy ones. From catching that Bollywood blockbuster on your smart TV, to whipping up a quick Chole Bhature, to sinking into soulful Sufi tunes at the end of a tiring day, you can now get hands-free help.


Beginning today, Indian users can welcome into their lives Google Home -- our voice-activated speaker powered by the Google Assistant. With a simple “Ok Google” or “Hey Google”, you can get answers, turn up the music, manage everyday tasks or even control smart devices around your home.


Google Home understands Indian accents, and will respond to you with uniquely Indian contexts. What’s great about the Google Assistant is that it’s the same across all your devices, so it works seamlessly for you wherever you need a helping hand. You can, for instance, ask it for the quickest route to the office, then tell it to push the directions to Google Maps on your smartphone, and you’re ready to navigate as you head out.


Designed to fit seamlessly into your home
We didn’t want Google Home to feel like a gadget, and took inspiration from consumer products that are commonly found in homes, like wine glasses, candles, and even donuts for Mini.


The top surface has LEDs that provide visual feedback when Google Home recognizes “Hey Google”, so you know when it is listening. In those rare moments when voice won’t do, the top surface is also a capacitive touch panel. You can simply use your finger to pause the music or adjust the volume.
Google Home was designed with two microphones to enable accurate far-field voice recognition. The microphone system uses a technique called neural beamforming. We’ve simulated hundreds of thousands of noisy environments and applied machine learning to recognize patterns that allow us to filter and separate speech from noise. This allows us to deliver best-in-class voice recognition and minimize error rates -- even from across the room. Home will be available in India in the Chalk color variant.
Google Home Mini is sleek and smooth, with no corners or edges. And it's small enough to easily place anywhere in your home. It’s almost entirely enclosed in custom fabric. We created this material from scratch, right down to the yarn. It’s durable and soft, but also transparent enough to let through both light and sound. It is available in Chalk and Charcoal, with Coral coming soon. Four LED lights underneath the fabric light up to show you when it hears you or when it’s thinking. Mini has far-field mics so it can hear you even when there’s music playing or loud noise in the background, and thanks to its circular design it can project 360-degree sound with just one speaker.


These devices join the Made by Google family of hardware products in India, and will be available for purchase online exclusively on Flipkart, and in over 750 retail stores across the country including Reliance Digital, Croma, Bajaj Electronics, Vijay Sales, Sangeetha, and Poorvika.


Tap into the power of Google with your Assistant
Need answers to a problem? Ask questions, translate phrases, run simple maths calculations and look up the meaning of a word. Too busy to stay on top of the news? Ask and you shall receive the latest stories from sources such as Times of India, NDTV, Dainik Bhaskar, India Today, Aaj Tak and more. Need a helping hand in the kitchen? Find ingredient substitutes, pull up nutritional information and unit conversions without having to wash your atta-covered fingers.


Google Home is truly ‘desi’
With a distinctly Indian voice, your Assistant on Google Home speaks and understands your language. Ask it “Hey Google, how desi are you?”, put its cricket knowledge to the test with “Hey Google, what is a silly point?”, tell it to “Play songs from the movie Satte Pe Satta”, or even get step-by-step cooking instructions in the kitchen with, “Hey Google, give me a recipe for Dum Biryani”.


Get personalised help for your everyday tasks
The Google Assistant on Google Home has been designed to help you get more stuff done when you have your hands full. With your permission, it will help with things like your commute, your daily schedule and more. And the best part? Up to six people can connect their accounts to one Google Home, so if you ask your Assistant to tell you about your day, it can distinguish your voice from other people in your family, and give you personalised answers. Just ask “Hey Google, tell me about my day” or say, “Hey Google, how long will it take to get to work?” and you’ll get up to speed on everything you need to know. It can wake you up in the morning (or let you snooze), set a timer while you’re baking, and so much more.


Turn up those tunes
Find the right rhythm for every occasion, whether you’re getting into the zone with sunrise yoga, hosting a dinner party, or burning off calories dancing with your little ones. You can play songs, playlists, artists, and albums from your favourite music subscription services like Google Play Music (with a six-month subscription, on us), along with offers from Saavn and Gaana*. You can also pair Google Home or Home Mini with your favorite Bluetooth speaker and set it to be the default output for all your music.


Control your smart home
Google Home can help you keep track of everything going on in your home: you can control your lights, switches and more, using compatible smart devices from brands like Philips Hue, D-Link and TP-Link. Just ask your Google Home, and your Assistant will turn off the kitchen light. If you have a Chromecast, you can also use voice commands to play Netflix or YouTube on your TV and binge watch your favorite shows. Enjoy multi-room audio by grouping Google Home devices together (with Chromecast Audio, Chromecast built-in and Bluetooth speakers) to listen to the same song in every room.


A speaker for any occasion
Whether you’re hosting a dinner or a solo dance party, Google Home delivers crystal-clear sound and creates an enjoyable listening experience. Plus, we designed Home to fit stylishly into any room. And you have the option to customize the base with different colors to reflect your home’s style.


With Google Home, we’re working with our partners to bring you many great launch offers: when buying Google Home or Google Home Mini on Flipkart you get a free JioFi router along with special offers on exchange and streaming music subscriptions; when buying a Google Home at Reliance Digital or MyJio stores you get a free JioFi router with 100GB of high-speed 4G data (worth Rs 2,499)**, and at select Philips Hue and Croma outlets you get a Philips Hue + Google Home Mini at a special bundled price. Also ACT Fibernet retail customers subscribing to 12-month advance rental plans of 90MBPS and above, will receive a Google Home Mini. And above all, users get 10 percent cashback when purchasing using HDFC Bank credit cards***.


Google Home and Google Home Mini will be priced at Rs 9,999, and Rs 4,499 respectively.


It’s just the beginning...
Your Assistant on Google Home will continue to get better over time as we add more features (look out for Hindi support coming later this year!). And Google Home is open to third-party apps for the Assistant, so expect even more of your favourite services and content.

Posted by Rishi Chandra, VP, Product Management, Google Home


Note:
*Both available from April 10 to October 31, 2018, for all Google Home and Home Mini users in India
**Offer valid until 30th April 2018
***Cashback limited to 10% of MRP

Coding your way into cinemas

This is a guest post from apertus° and TimVideos.us, open source organizations that participated in Google Summer of Code last year and are back for 2018!

The apertus° AXIOM project is bringing the world’s first open hardware/free software digital motion picture production camera to life. The project has a rich history, exercises a steadfast adherence to the open source ethos, and all aspects of development have always revolved around supporting and utilising free technologies. The challenge of building a sophisticated digital cinema camera was perfect for Google Summer of Code 2017. But let’s start at the beginning: why did the team behind the project embark on their journey?

Modern Cinematography

For over a century film was dominated by analog cameras and celluloid, but in the late 2000s things changed radically with the adoption of digital projection in cinemas. It was a natural next step, then, for filmmakers to shoot and produce films digitally. Certain applications in science, large format photography and fine arts still hold onto 35mm film processing, but the reduction in costs and improved workflows associated with digital image capture have revolutionised how we create and consume visual content.

The DSLR revolution

Photo by Matthew Pearce, licensed CC SA 2.0.
Filmmaking has long been considered an expensive discipline accessible only to a select few. This all changed with the adoption of movie recording capabilities in digital single-lens reflex (DSLR) cameras. For multinational corporations this “new” feature was a relatively straightforward addition to existing models as most compact digital photo cameras could already record video clips. This was the first time that a large diameter image sensor, a vital component for creating the typical shallow depth of field we consider cinematic, appeared in consumer cameras. In recent times, user groups have stepped up to contribute to the DSLR revolution first-hand, including groups like the Magic Lantern community.

Magic Lantern

Photo by Dave Dugdale licensed CC BY-SA 2.0.
Magic Lantern is a free and open source software add-on that runs from a camera’s SD/CF card. It adds a host of new features to Canon’s DSLRs that weren't included from the factory, such as allowing users to record high-dynamic range (HDR) video or 14-bit uncompressed RAW video. It’s a community project and many filmmakers simply wouldn’t have bought a Canon camera if it weren’t for the features that Magic Lantern pioneered. Because installing Magic Lantern doesn’t replace the stock Canon firmware or modify the read-only memory (ROM) but runs alongside it, it is both easy to remove and carries little risk. Originally developed for filmmaking, Magic Lantern’s feature base has expanded to include tools useful for still photography as well.

Starting the revolution for real 

Of course, Magic Lantern has been held back by the underlying proprietary hardware routines on existing camera models. So, in 2014 a team of developers and filmmakers around the apertus° project joined forces with the Magic Lantern team to lay the foundation for a totally independent, open hardware, free software, digital cinema camera. They ran a successful crowdfunding campaign for initial development, and they completed hardware development of the first developer kits in 2016. Unlike traditional cameras, the AXIOM is designed to be completely modular, and so continuously evolve, thereby preventing it from ever becoming obsolete. How the camera evolves is determined by its user community, with its design files and source code freely available and users encouraged to duplicate, modify and redistribute anything and everything related to the camera.

While the camera is primarily for use in motion picture production, there are many other applications where AXIOM can be useful. Individuals in science, astronomy, medicine, aerial mapping, industrial automation, and those who record events or talks at conferences have expressed interest in the camera. A modular and open source device for digital imaging allows users to build a system that meets their unique requirements. One such company, Mavrx Inc., which uses aerial imagery to provide actionable insight for the agriculture industry, used the camera because it enabled them not only to process the data more efficiently than comparable camera equivalents, but also to re-configure its form factor so that it could be installed alongside existing equipment configurations.

Google Summer of Code 2017

Continuing their journey, apertus° participated in Google Summer of Code for the first time in 2017. They received about 30 applications from interested students, from which they needed to select three. Projects ranged from field programmable gate array (FPGA) centered video applications to creating Linux kernel drivers for specific camera hardware. Similarly TimVideos.us, an open hardware project for live event streaming and conference recording, is working on FPGA projects around video interfaces and processing.

After some preliminary work, the students came to grips with the camera’s operating processes and all three dove in enthusiastically. One student failed the first evaluation and another failed the second, but one student successfully completed their work.

That student, Vlad Niculescu, worked on defining control loops for a voltage controller using VHSIC Hardware Description Language (VHDL) for a potential future AXIOM Beta Power Board, an FPGA-driven smart switching regulator for increasing the power efficiency and improving flexibility around voltage regulation.
Left: The printed circuit board (PCB) for testing the switching regulator FPGA logic. Right: After final improvements, the ripple in the voltages was reduced to around 30mV at a 2V target voltage.
Vlad had this to say about his experience:

“The knowledge I acquired during my work with this project and apertus° was very satisfying. Besides the electrical skills gained I also managed to obtain other, important universal skills. One of the things I learned was that the key to solving complex problems can often be found by dividing them into small blocks so that the greater whole can be easily observed by others. Writing better code and managing the stages of building a complex project have become lessons that will no doubt become valuable in the future. I will always be grateful to my mentor as he had the patience to explain everything carefully and teach me new things step by step, and also to apertus° and Google’s Summer of Code program, without which I may not have gained the experience of working on a project like this one.”

We are grateful for Vlad’s work and congratulate him for successfully completing the program. If you find open hardware and video production interesting, we encourage you to reach out and join the community. Both apertus° and TimVideos.us are back for Google Summer of Code 2018.

By Sebastian Pichelhofer, apertus°, and Tim 'mithro' Ansell, TimVideos.us

A Preview of Bristlecone, Google’s New Quantum Processor



The goal of the Google Quantum AI lab is to build a quantum computer that can be used to solve real-world problems. Our strategy is to explore near-term applications using systems that are forward compatible to a large-scale universal error-corrected quantum computer. For a quantum processor to run algorithms beyond the scope of classical simulations, it needs not only a large number of qubits. Crucially, the processor must also have low error rates on readout and logical operations, such as single- and two-qubit gates.

Today we presented Bristlecone, our new quantum processor, at the annual American Physical Society meeting in Los Angeles. The purpose of this gate-based superconducting system is to provide a testbed for research into system error rates and scalability of our qubit technology, as well as applications in quantum simulation, optimization, and machine learning.
Bristlecone is Google’s newest quantum processor (left). On the right is a cartoon of the device: each “X” represents a qubit, with nearest neighbor connectivity.
The guiding design principle for this device is to preserve the underlying physics of our previous 9-qubit linear array technology [1, 2], which demonstrated low error rates for readout (1%), single-qubit gates (0.1%) and most importantly two-qubit gates (0.6%) as our best result. This device uses the same scheme for coupling, control, and readout, but is scaled to a square array of 72 qubits. We chose a device of this size to be able to demonstrate quantum supremacy in the future, investigate first and second order error-correction using the surface code, and to facilitate quantum algorithm development on actual hardware.
2D conceptual chart showing the relationship between error rate and number of qubits. The intended research direction of the Quantum AI Lab is shown in red, where we hope to access near-term applications on the road to building an error corrected quantum computer.
Before investigating specific applications, it is important to quantify a quantum processor’s capabilities. Our theory team has developed a benchmarking tool for exactly this task. We can assign a single system error by applying random quantum circuits to the device and checking the sampled output distribution against a classical simulation. If a quantum processor can be operated with low enough error, it would be able to outperform a classical supercomputer on a well-defined computer science problem, an achievement known as quantum supremacy. These random circuits must be large in both the number of qubits and the computational length (depth). Although no one has achieved this goal yet, we calculate that quantum supremacy can be comfortably demonstrated with 49 qubits, a circuit depth exceeding 40, and a two-qubit error below 0.5%. We believe the experimental demonstration of a quantum processor outperforming a supercomputer would be a watershed moment for our field, and it remains one of our key objectives.
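This kind of benchmark can be illustrated with Cirq on a simulator: sample bitstrings from a random circuit and compare them against the ideal output probabilities. The linear cross-entropy estimator below is one common form of such a check, not necessarily the team's exact procedure, and since the "device" samples here also come from the noiseless simulator, the estimated fidelity should land near 1:

```python
import numpy as np
import cirq

# A small random circuit stands in for the much larger circuits used in practice.
qubits = cirq.LineQubit.range(5)
base = cirq.testing.random_circuit(qubits, n_moments=20, op_density=0.7, random_state=42)
measured = base + cirq.Circuit(cirq.measure(*qubits, key='m'))

# Ideal output probabilities from a noiseless simulation of the circuit.
state = cirq.Simulator().simulate(base).final_state_vector
probs = np.abs(state) ** 2

# "Device" samples; here they also come from the simulator.
samples = cirq.Simulator().run(measured, repetitions=2000).measurements['m']
bitstrings = samples.dot(1 << np.arange(samples.shape[1])[::-1])  # bits -> integers

# Linear cross-entropy fidelity estimate: 2^n * mean ideal probability of the
# observed bitstrings, minus 1. It should come out near 1 for a faithful sampler
# of a Porter-Thomas-like output distribution.
xeb = (2 ** len(qubits)) * probs[bitstrings].mean() - 1
print(f"linear XEB fidelity estimate: {xeb:.2f}")
```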
A Bristlecone chip being installed by Research Scientist Marissa Giustina at the Quantum AI Lab in Santa Barbara
We are looking to achieve similar performance to the best error rates of the 9-qubit device, but now across all 72 qubits of Bristlecone. We believe Bristlecone would then be a compelling proof-of-principle for building larger scale quantum computers. Operating a device such as Bristlecone at low system error requires harmony between a full stack of technology ranging from software and control electronics to the processor itself. Getting this right requires careful systems engineering over several iterations.

We are cautiously optimistic that quantum supremacy can be achieved with Bristlecone, and feel that learning to build and operate devices at this level of performance is an exciting challenge! We look forward to sharing the results and allowing collaborators to run experiments in the future.