Looking for a good snapshot of the state of AI research? Cloud giant Google recently reviewed its 2017 AI research and application highlights in a two-part blog. While hardly comprehensive, it’s a worthwhile, fast read for AI watchers. Few companies are as actively involved in AI as Google, so a bit of chest-thumping seems warranted. The posts are also full of links to supporting resources (papers, videos & sound clips, other blogs).
“In Part 1 of this blog post, we shared some of our work in 2017 related to our broader research, from designing new machine learning algorithms and techniques to understanding them, as well as sharing data, software, and hardware with the community. In this [second] post, we’ll dive into the research we do in some specific domains such as healthcare, robotics, creativity, fairness and inclusion, as well as share a little more about us,” wrote Jeff Dean, Google senior fellow and member of the Google Brain Team.
Here’s a snippet of one section on machine learning:
“The use of machine learning to replace traditional heuristics in computer systems also greatly interests us. We have shown how to use reinforcement learning to make placement decisions for mapping computational graphs onto a set of computational devices that are better than human experts. With other colleagues in Google Research, we have shown in “The Case for Learned Index Structures” that neural networks can be both faster and much smaller than traditional data structures such as B-trees, hash tables, and Bloom filters. We believe that we are just scratching the surface in terms of the use of machine learning in core computer systems, as outlined in a NIPS workshop talk on Machine Learning for Systems and Systems for Machine Learning.”
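The learned-index idea behind that B-tree claim is easy to sketch: a model learns the cumulative distribution of the keys and predicts where a key sits in the sorted array, and a recorded worst-case error bounds a short local search around that guess. Below is a minimal, hypothetical Python sketch of the approach; it substitutes ordinary linear regression for the paper’s small neural nets, and the names (`predict`, `lookup`, `max_err`) are illustrative, not drawn from Google’s code:

```python
import bisect
import random

# Hypothetical toy data: a sorted array of integer keys, standing in for
# a clustered database index.
random.seed(0)
keys = sorted(random.sample(range(10_000_000), 100_000))
n = len(keys)

# "Train" a stand-in model mapping key -> position. The paper uses small
# neural nets; ordinary least-squares keeps this sketch short.
mean_k = sum(keys) / n
mean_p = (n - 1) / 2
slope = sum((k - mean_k) * (i - mean_p) for i, k in enumerate(keys)) \
        / sum((k - mean_k) ** 2 for k in keys)
intercept = mean_p - slope * mean_k

def predict(k):
    return int(slope * k + intercept)

# Record the worst-case prediction error so lookups can do a bounded
# local search instead of a full binary search over the whole array.
max_err = max(abs(predict(k) - i) for i, k in enumerate(keys))

def lookup(k):
    """Learned-index lookup: predict a position, then search a small window."""
    pos = predict(k)
    lo = max(0, pos - max_err)
    hi = min(n, pos + max_err + 1)
    i = bisect.bisect_left(keys, k, lo, hi)
    return i if i < n and keys[i] == k else None

assert all(lookup(keys[i]) == i for i in range(0, n, 997))  # spot-check hits
print(lookup(keys[123]), lookup(-1))  # 123 and None (-1 is never sampled)
```

The error-bounding step is what lets a small model plus a narrow search window stand in for a large tree structure.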
It’s interesting to watch as the cloud giant has transformed from mostly a technology consumer, at least in hardware, into a vast technology invention machine, even in the processor arena. Google still buys a lot, but it invents a lot too, as noted here.
“We provided design input to Google’s Platforms team and they designed and produced our first generation Tensor Processing Unit (TPU): a single-chip ASIC designed to accelerate inference for deep learning models (inference is the use of an already-trained neural network, and is distinct from training). This first-generation TPU has been deployed in our data centers for three years, and it has been used to power deep learning models on every Google Search query, for Google Translate, for understanding images in Google Photos, for the AlphaGo matches against Lee Sedol and Ke Jie, and for many other research and product uses. In June, we published a paper at ISCA 2017, showing that this first-generation TPU was 15X – 30X faster than its contemporary GPU or CPU counterparts, with performance/Watt about 30X – 80X better.”
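The parenthetical on inference is worth a moment, since it explains why an inference-only chip is useful at all: training iteratively adjusts a model’s weights against labeled data, while serving that model afterward requires only the forward pass with those weights frozen. A toy sketch of the split, assuming nothing about TPU internals and using an ordinary logistic regression:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: 2-D points labeled by which side of a fixed line they fall on.
X = rng.normal(size=(200, 2))
y = (X @ np.array([2.0, -1.0]) > 0).astype(float)

# --- Training: the expensive part; iterate to fit weights to known labels ---
w = np.zeros(2)
for _ in range(500):
    p = 1 / (1 + np.exp(-(X @ w)))        # forward pass
    w -= 0.1 * X.T @ (p - y) / len(y)     # gradient step

# --- Inference: weights are now frozen; serving is just the forward pass ---
def infer(x):
    """Score a new point with the already-trained model."""
    return 1 / (1 + np.exp(-(x @ w)))

print(infer(np.array([1.0, -1.0])))   # near 1.0
print(infer(np.array([-1.0, 1.0])))   # near 0.0
```

Training is the expensive, occasional loop; inference is the cheap part that runs over and over on every query, which is exactly the workload the first-generation TPU targets.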
Avoiding bias is another ongoing challenge and, not surprisingly, an area where Google is active:
“As ML plays an increasing role in technology, considerations of inclusivity and fairness grow in importance. The Brain team and PAIR have been working hard to make progress in these areas. We’ve published on how to avoid discrimination in ML systems via causal reasoning, the importance of geodiversity in open datasets, and posted an analysis of an open dataset to understand diversity and cultural differences. We’ve also been working closely with the Partnership on AI, a cross-industry initiative, to help make sure that fairness and inclusion are promoted as goals for all ML practitioners.”
You get the idea. There’s a fair bit more in the blog, and much of the value lies in reviewing the supporting work attached via links. Yes, Google indulges in a bit of bragging here, but then again, why not?
Link to blog: https://research.googleblog.com