
Google at CVPR 2017



From July 21-26, Honolulu, Hawaii hosts the 2017 Conference on Computer Vision and Pattern Recognition (CVPR 2017), the premier annual computer vision event comprising the main conference and several co-located workshops and tutorials. As a leader in computer vision research and a Platinum Sponsor, Google will have a strong presence at CVPR 2017 — over 250 Googlers will be in attendance to present papers and invited talks at the conference, and to organize and participate in multiple workshops.

If you are attending CVPR this year, please stop by our booth and chat with our researchers who are actively pursuing the next generation of intelligent systems that utilize the latest machine learning techniques applied to various areas of machine perception. Our researchers will also be available to talk about and demo several recent efforts, including the technology behind Headset Removal for Virtual and Mixed Reality, Image Compression with Neural Networks, Jump, TensorFlow Object Detection API and much more.

You can learn more about our research being presented at CVPR 2017 in the list below (Googlers highlighted in blue).

Organizing Committee
Corporate Relations Chair - Mei Han
Area Chairs include - Alexander Toshev, Ce Liu, Vittorio Ferrari, David Lowe

Papers
Training object class detectors with click supervision
Dim Papadopoulos, Jasper Uijlings, Frank Keller, Vittorio Ferrari

Unsupervised Pixel-Level Domain Adaptation With Generative Adversarial Networks
Konstantinos Bousmalis, Nathan Silberman, David Dohan, Dumitru Erhan, Dilip Krishnan

BranchOut: Regularization for Online Ensemble Tracking With Convolutional Neural Networks
Bohyung Han, Jack Sim, Hartwig Adam

Enhancing Video Summarization via Vision-Language Embedding
Bryan A. Plummer, Matthew Brown, Svetlana Lazebnik

Learning by Association — A Versatile Semi-Supervised Training Method for Neural Networks
Philip Haeusser, Alexander Mordvintsev, Daniel Cremers

Context-Aware Captions From Context-Agnostic Supervision
Ramakrishna Vedantam, Samy Bengio, Kevin Murphy, Devi Parikh, Gal Chechik

Spatially Adaptive Computation Time for Residual Networks
Michael Figurnov, Maxwell D. Collins, Yukun Zhu, Li Zhang, Jonathan Huang, Dmitry Vetrov, Ruslan Salakhutdinov

Xception: Deep Learning With Depthwise Separable Convolutions
François Chollet

Deep Metric Learning via Facility Location
Hyun Oh Song, Stefanie Jegelka, Vivek Rathod, Kevin Murphy

Speed/Accuracy Trade-Offs for Modern Convolutional Object Detectors
Jonathan Huang, Vivek Rathod, Chen Sun, Menglong Zhu, Anoop Korattikara, Alireza Fathi, Ian Fischer, Zbigniew Wojna, Yang Song, Sergio Guadarrama, Kevin Murphy

Synthesizing Normalized Faces From Facial Identity Features
Forrester Cole, David Belanger, Dilip Krishnan, Aaron Sarna, Inbar Mosseri, William T. Freeman

Towards Accurate Multi-Person Pose Estimation in the Wild
George Papandreou, Tyler Zhu, Nori Kanazawa, Alexander Toshev, Jonathan Tompson, Chris Bregler, Kevin Murphy

GuessWhat?! Visual Object Discovery Through Multi-Modal Dialogue
Harm de Vries, Florian Strub, Sarath Chandar, Olivier Pietquin, Hugo Larochelle, Aaron Courville

Learning discriminative and transformation covariant local feature detectors
Xu Zhang, Felix X. Yu, Svebor Karaman, Shih-Fu Chang

Full Resolution Image Compression With Recurrent Neural Networks
George Toderici, Damien Vincent, Nick Johnston, Sung Jin Hwang, David Minnen, Joel Shor, Michele Covell

Learning From Noisy Large-Scale Datasets With Minimal Supervision
Andreas Veit, Neil Alldrin, Gal Chechik, Ivan Krasin, Abhinav Gupta, Serge Belongie

Unsupervised Learning of Depth and Ego-Motion From Video
Tinghui Zhou, Matthew Brown, Noah Snavely, David G. Lowe

Cognitive Mapping and Planning for Visual Navigation
Saurabh Gupta, James Davidson, Sergey Levine, Rahul Sukthankar, Jitendra Malik

Fast Fourier Color Constancy
Jonathan T. Barron, Yun-Ta Tsai

On the Effectiveness of Visible Watermarks
Tali Dekel, Michael Rubinstein, Ce Liu, William T. Freeman

YouTube-BoundingBoxes: A Large High-Precision Human-Annotated Data Set for Object Detection in Video
Esteban Real, Jonathon Shlens, Stefano Mazzocchi, Xin Pan, Vincent Vanhoucke

Workshops
Deep Learning for Robotic Vision
Organizers include: Anelia Angelova, Kevin Murphy
Program Committee includes: George Papandreou, Nathan Silberman, Pierre Sermanet

The Fourth Workshop on Fine-Grained Visual Categorization
Organizers include: Yang Song
Advisory Panel includes: Hartwig Adam
Program Committee includes: Anelia Angelova, Yuning Chai, Nathan Frey, Jonathan Krause, Catherine Wah, Weijun Wang

Language and Vision Workshop
Organizers include: R. Sukthankar

The First Workshop on Negative Results in Computer Vision
Organizers include: R. Sukthankar, W. Freeman, J. Malik

Visual Understanding by Learning from Web Data
General Chairs include: Jesse Berent, Abhinav Gupta, Rahul Sukthankar
Program Chairs include: Wei Li

YouTube-8M Large-Scale Video Understanding Challenge
General Chairs: Paul Natsev, Rahul Sukthankar
Program Chairs: Joonseok Lee, George Toderici
Challenge Organizers: Sami Abu-El-Haija, Anja Hauth, Nisarg Kothari, Hanhan Li, Sobhan Naderi Parizi, Balakrishnan Varadarajan, Sudheendra Vijayanarasimhan, Jian Wang

Open Source at Google I/O 2017

One of the best parts of Google I/O every year is the chance to meet with the developers and community organizers from all over the world. It's a unique opportunity to have candid one-on-one conversations about the products and technologies we all love.

This year, I/O features a Community Lounge for attendees to relax, hang out, and play with neat experiments and games. It also features several mini-meetups during which you can chat with Googlers on a variety of topics.

Chris DiBona and Will Norris from the Google Open Source Programs Office will be around Thursday and Friday to talk about anything and everything open source, including our student outreach programs and the new Google Open Source website. If you're at Google I/O this year, make sure to drop by and say hello. Find dates, times, and other details in the Community Lounge schedule.

By Josh Simmons, Google Open Source

Saddle up and meet us in Texas for OSCON 2017

Program chairs at OSCON 2016, left to right:
Kelsey Hightower, Scott Hanselman, Rachel Roumeliotis.
Photo used with permission from O'Reilly Media.
The Google Open Source team is getting ready to hit the road and join the open source panoply that is Open Source Convention (OSCON). This year the event runs May 8-11 in Austin, Texas and is preceded on May 6-7 by the free-to-attend Community Leadership Summit (CLS).

You’ll find our team and many other Googlers throughout the week on the program schedule and in the expo hall at booth #401. We’ve got a full rundown of our schedule below, but you can swing by the expo hall anytime to discuss Google Cloud Platform, our open source outreach programs, the projects we’ve open-sourced including Kubernetes, TensorFlow, gRPC, and even our recently released open source documentation.

Of course, you’ll also find our very own Kelsey Hightower everywhere since he is serving as one of three OSCON program chairs for the second year in a row.

Are you a student, educator, project maintainer, community leader, past or present participant in Google Summer of Code or Google Code-in? Join us for lunch at the Google Summer of Code table in the conference lunch area on Wednesday afternoon. We’ll discuss our outreach programs which help open source communities grow while providing students with real world software development experience. We’ll be updating this blog post and tweeting with details closer to the date.

Without further ado, here’s our schedule of events:

Monday, May 8th (Tutorials)

Tuesday, May 9th (Tutorials)

Wednesday, May 10th (Sessions)
12:30pm Google Summer of Code and Google Code-in lunch

Thursday, May 11th (Sessions)

We look forward to seeing you deep in the heart of Texas at OSCON 2017!

By Josh Simmons, Google Open Source

Research at Google and ICLR 2017



This week, Toulon, France hosts the 5th International Conference on Learning Representations (ICLR 2017), a conference focused on how one can learn meaningful and useful representations of data for Machine Learning. ICLR includes conference and workshop tracks, with invited talks along with oral and poster presentations of some of the latest research on deep learning, metric learning, kernel learning, compositional models, non-linear structured prediction, and issues regarding non-convex optimization.

At the forefront of innovation in Neural Networks and Deep Learning, Google focuses on both theory and application, developing learning approaches to understand and generalize. As Platinum Sponsor of ICLR 2017, Google will have a strong presence with over 50 researchers attending (many from the Google Brain team and Google Research Europe), contributing to and learning from the broader academic research community by presenting papers and posters, in addition to participating on organizing committees and in workshops.

If you are attending ICLR 2017, we hope you'll stop by our booth and chat with our researchers about the projects and opportunities at Google that go into solving interesting problems for billions of people. You can also learn more about our research being presented at ICLR 2017 in the list below (Googlers highlighted in blue).

Area Chairs include:
George Dahl, Slav Petrov, Vikas Sindhwani

Program Chairs include:
Hugo Larochelle, Tara Sainath

Contributed Talks
Understanding Deep Learning Requires Rethinking Generalization (Best Paper Award)
Chiyuan Zhang*, Samy Bengio, Moritz Hardt, Benjamin Recht*, Oriol Vinyals

Semi-Supervised Knowledge Transfer for Deep Learning from Private Training Data (Best Paper Award)
Nicolas Papernot*, Martín Abadi, Úlfar Erlingsson, Ian Goodfellow, Kunal Talwar

Q-Prop: Sample-Efficient Policy Gradient with An Off-Policy Critic
Shixiang (Shane) Gu*, Timothy Lillicrap, Zoubin Ghahramani, Richard E. Turner, Sergey Levine

Neural Architecture Search with Reinforcement Learning
Barret Zoph, Quoc Le

Posters
Adversarial Machine Learning at Scale
Alexey Kurakin, Ian J. Goodfellow, Samy Bengio

Capacity and Trainability in Recurrent Neural Networks
Jasmine Collins, Jascha Sohl-Dickstein, David Sussillo

Improving Policy Gradient by Exploring Under-Appreciated Rewards
Ofir Nachum, Mohammad Norouzi, Dale Schuurmans

Outrageously Large Neural Networks: The Sparsely-Gated Mixture-of-Experts Layer
Noam Shazeer, Azalia Mirhoseini, Krzysztof Maziarz, Andy Davis, Quoc Le, Geoffrey Hinton, Jeff Dean

Unrolled Generative Adversarial Networks
Luke Metz, Ben Poole*, David Pfau, Jascha Sohl-Dickstein

Categorical Reparameterization with Gumbel-Softmax
Eric Jang, Shixiang (Shane) Gu*, Ben Poole*

Decomposing Motion and Content for Natural Video Sequence Prediction
Ruben Villegas, Jimei Yang, Seunghoon Hong, Xunyu Lin, Honglak Lee

Density Estimation Using Real NVP
Laurent Dinh*, Jascha Sohl-Dickstein, Samy Bengio

Latent Sequence Decompositions
William Chan*, Yu Zhang*, Quoc Le, Navdeep Jaitly*

Learning a Natural Language Interface with Neural Programmer
Arvind Neelakantan*, Quoc V. Le, Martín Abadi, Andrew McCallum*, Dario Amodei*

Deep Information Propagation
Samuel Schoenholz, Justin Gilmer, Surya Ganguli, Jascha Sohl-Dickstein

Identity Matters in Deep Learning
Moritz Hardt, Tengyu Ma

A Learned Representation For Artistic Style
Vincent Dumoulin*, Jonathon Shlens, Manjunath Kudlur

Adversarial Training Methods for Semi-Supervised Text Classification
Takeru Miyato, Andrew M. Dai, Ian Goodfellow

HyperNetworks
David Ha, Andrew Dai, Quoc V. Le

Learning to Remember Rare Events
Lukasz Kaiser, Ofir Nachum, Aurko Roy*, Samy Bengio

Workshop Track Abstracts
Particle Value Functions
Chris J. Maddison, Dieterich Lawson, George Tucker, Nicolas Heess, Arnaud Doucet, Andriy Mnih, Yee Whye Teh

Neural Combinatorial Optimization with Reinforcement Learning
Irwan Bello, Hieu Pham, Quoc V. Le, Mohammad Norouzi, Samy Bengio

Short and Deep: Sketching and Neural Networks
Amit Daniely, Nevena Lazic, Yoram Singer, Kunal Talwar

Explaining the Learning Dynamics of Direct Feedback Alignment
Justin Gilmer, Colin Raffel, Samuel S. Schoenholz, Maithra Raghu, Jascha Sohl-Dickstein

Training a Subsampling Mechanism in Expectation
Colin Raffel, Dieterich Lawson

Tuning Recurrent Neural Networks with Reinforcement Learning
Natasha Jaques*, Shixiang (Shane) Gu*, Richard E. Turner, Douglas Eck

REBAR: Low-Variance, Unbiased Gradient Estimates for Discrete Latent Variable Models
George Tucker, Andriy Mnih, Chris J. Maddison, Jascha Sohl-Dickstein

Adversarial Examples in the Physical World
Alexey Kurakin, Ian Goodfellow, Samy Bengio

Regularizing Neural Networks by Penalizing Confident Output Distributions
Gabriel Pereyra, George Tucker, Jan Chorowski, Lukasz Kaiser, Geoffrey Hinton

Unsupervised Perceptual Rewards for Imitation Learning
Pierre Sermanet, Kelvin Xu, Sergey Levine

Changing Model Behavior at Test-time Using Reinforcement Learning
Augustus Odena, Dieterich Lawson, Christopher Olah

* Work performed while at Google
† Work performed while at OpenAI

By maintainers, for maintainers: Wontfix_Cabal

The Google Open Source Programs Office likes to highlight events we support, organize, or speak at. In this case, Google’s own Jess Frazelle was responsible for running a unique event for open source maintainers.

This year I helped organize the inaugural Wontfix_Cabal. The conference was organized by open source software maintainers for open source software maintainers. Our initial concept was an unconference where attendees could candidly discuss topics with their peers from other open source communities.

The idea for the event stemmed from the response to a blog post I published about closing pull requests. The response was overwhelming, with many maintainers commiserating and sharing lessons they had learned. It seemed like we could all learn a lot from our peers in other projects -- if we had the space to do so -- and it was clear that people needed a place to vent.

Major thanks to Katrina Owen and Brandon Keepers from GitHub who jumped right in and provided the venue we needed to make this happen. Without their support this would’ve never become a reality!

It was an excellent first event and the topics discussed were wide ranging, including:
  • How to deal with unmaintained projects
  • Collecting metrics to judge project health
  • Helping newcomers
  • Dealing with backlogs
  • Coping with, and minimizing, toxic behavior in our communities


The discussion around helping newcomers focused on creating communities with welcoming and productive cultures right from the start. I was fascinated to learn that some projects pre-fill issues before going public so as to set the tone for the future of the project. Another good practice is clearly defining how one becomes a maintainer or gets commit access. There should be clear rules in place so people know what they have to do to succeed.

Another discussion I really liked focused on “saying no.” Close fast and close early was a key takeaway. There’s no sense in letting a contribution sit waiting when you know it will never be accepted. Multiple projects found that having a bot give the hard news was always better than having the maintainer do it. This way it is not personal, just a regular part of the process.
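The bot approach can be sketched in a few lines. This is an illustrative example, not any project's actual bot: it assumes the GitHub REST API, and the repository name, pull request number, and message wording are hypothetical placeholders.

```python
# Sketch of a "hard news" bot: comment on an out-of-scope pull request with a
# standard message, then close it via the GitHub REST API.
import json
import urllib.request

DECLINE_TEMPLATE = (
    "Thanks for the contribution! This change is outside the project's "
    "current scope (see CONTRIBUTING.md), so we're closing it rather than "
    "letting it wait indefinitely. This is an automated message and not a "
    "personal judgment."
)


def decline_message(author):
    """Build the standard close message, addressed to the PR author."""
    return "@{} {}".format(author, DECLINE_TEMPLATE)


def close_with_comment(repo, number, author, token):
    """Post the comment, then close the pull request."""
    headers = {
        "Authorization": "token " + token,
        "Content-Type": "application/json",
    }
    # Comments on pull requests go through the issues endpoint.
    comment = urllib.request.Request(
        "https://api.github.com/repos/%s/issues/%d/comments" % (repo, number),
        data=json.dumps({"body": decline_message(author)}).encode(),
        headers=headers, method="POST")
    urllib.request.urlopen(comment)
    # Closing the PR is a state change on the pulls endpoint.
    close = urllib.request.Request(
        "https://api.github.com/repos/%s/pulls/%d" % (repo, number),
        data=json.dumps({"state": "closed"}).encode(),
        headers=headers, method="PATCH")
    urllib.request.urlopen(close)


if __name__ == "__main__":
    # Dry run: show the message a contributor would receive.
    print(decline_message("somecontributor"))
```

Because the message is templated and posted by a bot account, every declined contributor sees the same even-handed wording, which is exactly what makes the rejection feel procedural rather than personal.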

One theme seen in multiple sessions: “Being kind is not the same as being nice.” The distinction here is that being nice comes from a place of fear and leads people to bend over backwards just to please. Being kind comes from a place of strength, from doing the right thing.

Summaries of many of the discussions have been added to the GitHub repo if you would like to read more.

After the event concluded many maintainers got right to work, putting what they had learned into practice. For instance, Rust got help from the Google open source fuzzing team.


Our goal was to put together a community of maintainers that could support and learn from each other. When I saw Linux kernel maintainers talking to people who work on Node and JavaScript, I knew we had achieved that goal. Laura Abbott, one of those kernel developers, wrote a blog post about the experience.

Not only was the event useful, it was also a lot of fun. Meeting maintainers, people who care a great deal about open source software, from such a diverse group of projects was great. Overall, I think our initial run was a success! Follow us on Twitter to find out about future events.

By Jess Frazelle, Software Engineer

Googlers on the road: FOSDEM 2017

The new year is off to an excellent start as we wrap up the 7th year of Google Code-in, ramp up for the 13th year of Google Summer of Code, and return from connecting with our compatriots in the open source community down under at Linux.conf.au. Next up? We’re headed to FOSDEM, Europe’s famed non-commercial and volunteer-organized open source conference.

FOSDEM logo licensed under CC BY.

FOSDEM is hosted in Brussels on the Université libre de Bruxelles campus and runs the weekend of February 4-5. It's a unique event in the spirit of free and open source software, and it's free to attend. This year the organizers are expecting 8,000+ attendees.

We’re looking forward to talking face-to-face with some of the thousands of former students, mentors and organization administrators who have participated in our student programs. A few of them will even be giving talks about their recent Google Summer of Code experience.

If you’d like to say hello or chat about our programs, you’ll be sure to find a Googler or two at our table. You’ll also find a number of Googlers in the program schedule:

Saturday, February 4th

2:00pm    Bazel: How to build at Google scale by Klaus Aehlig
3:25pm    Copyleft in Commerce: How GPLv3 keeps Samba relevant in the marketplace by Jeremy Allison

Sunday, February 5th

10:40am  gRPC 101: Building fast and efficient microservices by Ray Tsang
10:50am  Is the GPL a copyright license or a contract under U.S. law? by Max Sills
12:45pm  The state of Go: What to expect in Go 1.8 by Francesc Campoy
1:00pm    Analyze terabytes of OS code with one query by Felipe Hoffa (more info)
2:50pm    Like the ants: Turn individuals into a large contributing community by Dan Franc

See you there!

By Josh Simmons, Open Source Programs Office

Open source down under: Linux.conf.au 2017

It’s a new year and open source enthusiasts from around the globe are preparing to gather at the edge of the world for Linux.conf.au 2017. Among those preparing are Googlers, including some of us from the Open Source Programs Office.

This year Linux.conf.au is returning to Hobart, the riverside capital of Tasmania, home of Australia’s famous Tasmanian devils, running five days between January 16 and 20. The theme is the “Future of Open Source.”
Tuz, a Tasmanian devil sporting a penguin beak, is the Linux.conf.au mascot.
(Artwork by Tania Walker licensed under CC BY-SA.)
The conference, community organized since it began in 1999, is well equipped to explore that theme, which is reflected in the program schedule and miniconfs.

You’ll find Googlers speaking throughout the week, as well as participating in the hallway track. Don’t miss our Birds of a Feather session if you’re a student, educator, project maintainer, or otherwise interested in talking about outreach and student programs like Google Summer of Code and Google Code-in.

Monday, January 16th
12:20pm The Sound of Silencing by Julien Goodwin
4:35pm   Year of the Linux Desktop? by Jessica Frazelle

Tuesday, January 17th
All day    Community Leadership Summit X at LCA

Wednesday, January 18th
2:15pm   Community Building Beyond the Black Stump by Josh Simmons
4:35pm   Contributing to and Maintaining Large Scale Open Source Projects by Jessica Frazelle

Thursday, January 19th
4:35pm   Using Python for creating hardware to record FOSS conferences! by Tim Ansell

Friday, January 20th
1:20pm   Linux meets Kubernetes by Vishnu Kannan

Not able to make it to the conference? Keynotes and sessions will be livestreamed, and you can always find the session recordings online after the event.

We’ll see you there!

By Josh Simmons, Open Source Programs Office

NIPS 2016 & Research at Google



This week, Barcelona hosts the 30th Annual Conference on Neural Information Processing Systems (NIPS 2016), a machine learning and computational neuroscience conference that includes invited talks, demonstrations and oral and poster presentations of some of the latest in machine learning research. Google will have a strong presence at NIPS 2016, with over 280 Googlers attending in order to contribute to and learn from the broader academic research community by presenting technical talks and posters, in addition to hosting workshops and tutorials.

Research at Google is at the forefront of innovation in Machine Intelligence, actively exploring virtually all aspects of machine learning including classical algorithms as well as cutting-edge techniques such as deep learning. Focusing on both theory as well as application, much of our work on language understanding, speech, translation, visual processing, ranking, and prediction relies on Machine Intelligence. In all of those tasks and many others, we gather large volumes of direct or indirect evidence of relationships of interest, and develop learning approaches to understand and generalize.

If you are attending NIPS 2016, we hope you’ll stop by our booth and chat with our researchers about the projects and opportunities at Google that go into solving interesting problems for billions of people, and to see demonstrations of some of the exciting research we pursue. You can also learn more about our work being presented at NIPS 2016 in the list below (Googlers highlighted in blue).

Google is a Platinum Sponsor of NIPS 2016.

Organizing Committee
Executive Board includes: Corinna Cortes, Fernando Pereira
Advisory Board includes: John C. Platt
Area Chairs include: Jonathon Shlens, Moritz Hardt, Navdeep Jaitly, Hugo Larochelle, Honglak Lee, Sanjiv Kumar, Gal Chechik

Invited Talk
Dynamic Legged Robots
Marc Raibert

Accepted Papers:
Boosting with Abstention
Corinna Cortes, Giulia DeSalvo, Mehryar Mohri

Community Detection on Evolving Graphs
Stefano Leonardi, Aris Anagnostopoulos, Jakub Łącki, Silvio Lattanzi, Mohammad Mahdian

Linear Relaxations for Finding Diverse Elements in Metric Spaces
Aditya Bhaskara, Mehrdad Ghadiri, Vahab Mirrokni, Ola Svensson

Nearly Isometric Embedding by Relaxation
James McQueen, Marina Meila, Dominique Joncas

Optimistic Bandit Convex Optimization
Mehryar Mohri, Scott Yang

Reward Augmented Maximum Likelihood for Neural Structured Prediction
Mohammad Norouzi, Samy Bengio, Zhifeng Chen, Navdeep Jaitly, Mike Schuster, Yonghui Wu, Dale Schuurmans

Stochastic Gradient MCMC with Stale Gradients
Changyou Chen, Nan Ding, Chunyuan Li, Yizhe Zhang, Lawrence Carin

Unsupervised Learning for Physical Interaction through Video Prediction
Chelsea Finn*, Ian Goodfellow, Sergey Levine

Using Fast Weights to Attend to the Recent Past
Jimmy Ba, Geoffrey Hinton, Volodymyr Mnih, Joel Leibo, Catalin Ionescu

A Credit Assignment Compiler for Joint Prediction
Kai-Wei Chang, He He, Stephane Ross, Hal Daumé III

A Neural Transducer
Navdeep Jaitly, Quoc Le, Oriol Vinyals, Ilya Sutskever, David Sussillo, Samy Bengio

Attend, Infer, Repeat: Fast Scene Understanding with Generative Models
S. M. Ali Eslami, Nicolas Heess, Theophane Weber, Yuval Tassa, David Szepesvari, Koray Kavukcuoglu, Geoffrey Hinton

Bi-Objective Online Matching and Submodular Allocations
Hossein Esfandiari, Nitish Korula, Vahab Mirrokni

Combinatorial Energy Learning for Image Segmentation
Jeremy Maitin-Shepard, Viren Jain, Michal Januszewski, Peter Li, Pieter Abbeel

Deep Learning Games
Dale Schuurmans, Martin Zinkevich

DeepMath - Deep Sequence Models for Premise Selection
Geoffrey Irving, Christian Szegedy, Niklas Een, Alexander Alemi, François Chollet, Josef Urban

Density Estimation via Discrepancy Based Adaptive Sequential Partition
Dangna Li, Kun Yang, Wing Wong

Domain Separation Networks
Konstantinos Bousmalis, George Trigeorgis, Nathan Silberman, Dilip Krishnan, Dumitru Erhan

Fast Distributed Submodular Cover: Public-Private Data Summarization
Baharan Mirzasoleiman, Morteza Zadimoghaddam, Amin Karbasi

Satisfying Real-world Goals with Dataset Constraints
Gabriel Goh, Andrew Cotter, Maya Gupta, Michael P Friedlander

Can Active Memory Replace Attention?
Łukasz Kaiser, Samy Bengio

Fast and Flexible Monotonic Functions with Ensembles of Lattices
Kevin Canini, Andy Cotter, Maya Gupta, Mahdi Fard, Jan Pfeifer

Launch and Iterate: Reducing Prediction Churn
Quentin Cormier, Mahdi Fard, Kevin Canini, Maya Gupta

On Mixtures of Markov Chains
Rishi Gupta, Ravi Kumar, Sergei Vassilvitskii

Orthogonal Random Features
Felix Xinnan Yu, Ananda Theertha Suresh, Krzysztof Choromanski, Dan Holtmann-Rice, Sanjiv Kumar

Perspective Transformer Nets: Learning Single-View 3D Object Reconstruction without 3D Supervision
Xinchen Yan, Jimei Yang, Ersin Yumer, Yijie Guo, Honglak Lee

Structured Prediction Theory Based on Factor Graph Complexity
Corinna Cortes, Vitaly Kuznetsov, Mehryar Mohri, Scott Yang

Toward Deeper Understanding of Neural Networks: The Power of Initialization and a Dual View on Expressivity
Amit Daniely, Roy Frostig, Yoram Singer

Demonstrations
Interactive musical improvisation with Magenta
Adam Roberts, Sageev Oore, Curtis Hawthorne, Douglas Eck

Content-based Related Video Recommendation
Joonseok Lee

Workshops, Tutorials and Symposia
Advances in Approximate Bayesian Inference
Advisory Committee includes: Kevin P. Murphy
Invited Speakers include: Matt Johnson
Panelists include: Ryan Sepassi

Adversarial Training
Accepted Authors: Luke Metz, Ben Poole, David Pfau, Jascha Sohl-Dickstein, Augustus Odena, Christopher Olah, Jonathon Shlens

Bayesian Deep Learning
Organizers include: Kevin P. Murphy
Accepted Authors include: Rif A. Saurous, Eugene Brevdo, Kevin Murphy

Brains & Bits: Neuroscience Meets Machine Learning
Organizers include: Jascha Sohl-Dickstein

Connectomics II: Opportunities & Challenges for Machine Learning
Organizers include: Viren Jain

Constructive Machine Learning
Invited Speakers include: Douglas Eck

Continual Learning & Deep Networks
Invited Speakers include: Honglak Lee

Deep Learning for Action & Interaction
Organizers include: Sergey Levine
Invited Speakers include: Honglak Lee
Accepted Authors include: Pararth Shah, Dilek Hakkani-Tur, Larry Heck

End-to-end Learning for Speech and Audio Processing
Invited Speakers include: Tara Sainath
Accepted Authors include: Brian Patton, Yannis Agiomyrgiannakis, Michael Terry, Kevin Wilson, Rif A. Saurous, D. Sculley

Extreme Classification: Multi-class & Multi-label Learning in Extremely Large Label Spaces
Organizers include: Samy Bengio

Interpretable Machine Learning for Complex Systems
Invited Speaker: Honglak Lee
Accepted Authors include: Daniel Smilkov, Nikhil Thorat, Charles Nicholson, Emily Reif, Fernanda Viegas, Martin Wattenberg

Large Scale Computer Vision Systems
Organizers include: Gal Chechik

Machine Learning Systems
Invited Speakers include: Jeff Dean

Nonconvex Optimization for Machine Learning: Theory & Practice
Organizers include: Hossein Mobahi

Optimizing the Optimizers
Organizers include: Alex Davies

Reliable Machine Learning in the Wild
Accepted Authors: Andres Medina, Sergei Vassilvitskii

The Future of Gradient-Based Machine Learning Software
Invited Speakers: Jeff Dean, Matt Johnson

Time Series Workshop
Organizers include: Vitaly Kuznetsov
Invited Speakers include: Mehryar Mohri

Theory and Algorithms for Forecasting Non-Stationary Time Series
Tutorial Organizers: Vitaly Kuznetsov, Mehryar Mohri

Women in Machine Learning
Invited Speakers include: Maya Gupta



* Work done as part of the Google Brain team

ACL 2016 & Research at Google



This week, Berlin hosts the 2016 Annual Meeting of the Association for Computational Linguistics (ACL 2016), the premier conference of the field of computational linguistics, covering a broad spectrum of diverse research areas that are concerned with computational approaches to natural language. As a leader in Natural Language Processing (NLP) and a Platinum Sponsor of the conference, Google will be on hand to showcase research interests that include syntax, semantics, discourse, conversation, multilingual modeling, sentiment analysis, question answering, summarization, and generally building better learners using labeled and unlabeled data, state-of-the-art modeling, and learning from indirect supervision.

Our systems are used in numerous ways across Google, impacting user experience in search, mobile, apps, ads, translate and more. Our work spans the range of traditional NLP tasks, with general-purpose syntax and semantic algorithms underpinning more specialized systems.
Our researchers are experts in natural language processing and machine learning, and combine methodological research with applied science, and our engineers are equally involved in long-term research efforts and driving immediate applications of our technology.

If you’re attending ACL 2016, we hope that you’ll stop by the booth to check out some demos, meet our researchers and discuss projects and opportunities at Google that go into solving interesting problems for billions of people. Learn more about Google research being presented at ACL 2016 below (Googlers highlighted in blue), and visit the Natural Language Understanding Team page at g.co/NLUTeam.

Papers
Generalized Transition-based Dependency Parsing via Control Parameters
Bernd Bohnet, Ryan McDonald, Emily Pitler, Ji Ma

Learning the Curriculum with Bayesian Optimization for Task-Specific Word Representation Learning
Yulia Tsvetkov, Manaal Faruqui, Wang Ling (Google DeepMind), Chris Dyer (Google DeepMind)

Morpho-syntactic Lexicon Generation Using Graph-based Semi-supervised Learning (TACL)
Manaal Faruqui, Ryan McDonald, Radu Soricut

Many Languages, One Parser (TACL)
Waleed Ammar, George Mulcaire, Miguel Ballesteros, Chris Dyer (Google DeepMind)*, Noah A. Smith

Latent Predictor Networks for Code Generation
Wang Ling (Google DeepMind), Phil Blunsom (Google DeepMind), Edward Grefenstette (Google DeepMind), Karl Moritz Hermann (Google DeepMind), Tomáš Kočiský (Google DeepMind), Fumin Wang (Google DeepMind), Andrew Senior (Google DeepMind)

Collective Entity Resolution with Multi-Focal Attention
Amir Globerson, Nevena Lazic, Soumen Chakrabarti, Amarnag Subramanya, Michael Ringgaard, Fernando Pereira

Plato: A Selective Context Model for Entity Resolution (TACL)
Nevena Lazic, Amarnag Subramanya, Michael Ringgaard, Fernando Pereira

WikiReading: A Novel Large-scale Language Understanding Task over Wikipedia
Daniel Hewlett, Alexandre Lacoste, Llion Jones, Illia Polosukhin, Andrew Fandrianto, Jay Han, Matthew Kelcey, David Berthelot

Stack-propagation: Improved Representation Learning for Syntax
Yuan Zhang, David Weiss

Cross-lingual Models of Word Embeddings: An Empirical Comparison
Shyam Upadhyay, Manaal Faruqui, Chris Dyer (Google DeepMind), Dan Roth

Globally Normalized Transition-Based Neural Networks (Outstanding Papers Session)
Daniel Andor, Chris Alberti, David Weiss, Aliaksei Severyn, Alessandro Presta, Kuzman Ganchev, Slav Petrov, Michael Collins

Posters
Cross-lingual projection for class-based language models
Beat Gfeller, Vlad Schogol, Keith Hall

Synthesizing Compound Words for Machine Translation
Austin Matthews, Eva Schlinger*, Alon Lavie, Chris Dyer (Google DeepMind)*

Cross-Lingual Morphological Tagging for Low-Resource Languages
Jan Buys, Jan A. Botha

Workshops
1st Workshop on Representation Learning for NLP
Keynote Speakers include: Raia Hadsell (Google DeepMind)
Workshop Organizers include: Edward Grefenstette (Google DeepMind), Phil Blunsom (Google DeepMind), Karl Moritz Hermann (Google DeepMind)
Program Committee members include: Tomáš Kočiský (Google DeepMind), Wang Ling (Google DeepMind), Ankur Parikh (Google), John Platt (Google), Oriol Vinyals (Google DeepMind)

1st Workshop on Evaluating Vector-Space Representations for NLP
Contributed Papers:
Problems With Evaluation of Word Embeddings Using Word Similarity Tasks
Manaal Faruqui, Yulia Tsvetkov, Pushpendre Rastogi, Chris Dyer (Google DeepMind)*

Correlation-based Intrinsic Evaluation of Word Vector Representations
Yulia Tsvetkov, Manaal Faruqui, Chris Dyer (Google DeepMind)

SIGFSM Workshop on Statistical NLP and Weighted Automata
Contributed Papers:
Distributed representation and estimation of WFST-based n-gram models
Cyril Allauzen, Michael Riley, Brian Roark

Pynini: A Python library for weighted finite-state grammar compilation
Kyle Gorman


* Work completed at CMU

Lessons from Professors’ Open Source Software Experience (POSSE) 2016


From Google Summer of Code to Google Code-in, the Open Source Programs Office does a lot to get students involved with open source. In order to learn more about supporting open source in academia, I attended the NSF-funded Professors' Open Source Software Experience (POSSE) in Philadelphia. It was a great opportunity to better understand the challenges instructors face in weaving open source into their curricula, and to hear solutions for bridging the gap.

Almost 30 university professors and community college lecturers attended the 3-day workshop. During the workshop, attendees worked in small groups, getting hands-on experience incorporating humanitarian free and open source software (HFOSS) into their teaching. Professors were able to talk, mingle and share best practices throughout the event.

The POSSE workshop is led by Heidi Ellis, Professor, Department of Computer Science and Information Technology at Western New England University, and Greg Hislop, Professor of Software Engineering and Senior Associate Dean for Academic Affairs at Drexel University. Heidi and Greg took over running POSSE five years after Red Hat began the program as an outreach effort to the higher education community. Red Hat continues as a collaborator in the effort. Around 40 university and community college professors participate in the program every year with over 100 individuals attending the workshop in the last four years.

Here are some of the challenges professors shared:
  • Very little guidance on how to bring FOSS into the classroom. No standard curriculum / syllabus available to reference. 
  • Time investment required to change the curriculum.
  • Will not be rewarded for teaching FOSS courses.
  • Will not get funds to travel for workshops/conferences unless it’s to present a paper at a conference.
  • Many administrations aren’t aware that adding open source is beneficial for students since more and more companies use open source and expect their new hires to be familiar with it.

The next POSSE will be held November 17-19. Faculty interested in attending can click here to apply.

We also discussed a number of open source programs that are currently working to engage students with open source software development:

Thanks to Heidi, Greg and the FOSS2Serve team for organizing POSSE 2016! We look forward to taking what we’ve learned and using it to better support FOSS education in academia.

By Feiran Helen Hu, Open Source Programs Office