At I/O 2017, Google doubled down on a future built on AI
A few years ago, when a cadre of dudes jumped out of a zeppelin wearing Google Glass, nearly everyone watching had a “holy shit” moment. Company execs had just run through a slew of big, consumer-facing announcements, and then Sergey Brin threw the presentation to a live video streamed by people hurtling through the air. In that moment, Google wasn’t just a terribly smart company — it was a terribly cool one, too.
Fast forward a few years, and I/O now seems a little subdued. Apart from the crowd clamoring to see LCD Soundsystem run through a set, the energy in the air felt calmer than before. Last year’s fun, open-air demo areas were gone too, replaced mostly by air-conditioned domes after attendees complained about the heat.
So yes, I/O is different now. Google doesn’t have the exact same priorities for its developer conference as it used to. And that’s OK. Arguably, things are as they should be. The shiny new gadgets can come later — Google’s message with I/O 2017 is all about weaving the fabric of the future, with artificial intelligence as the most crucial thread.
Just look at the things Google played up in its keynotes. The search giant made it easier for hardware makers to bake the Google Assistant into their products, for one. (Oh, and there’s iOS support now, too.) That means broader adoption of a feature whose primary purpose is to understand you, be it via your voice or the bits of information that form a digital outline of your life.
The level of computational intelligence required to understand a person writing or speaking (in several new languages, no less) is crucially important as the interfaces on our devices become more personal. And of course, there’s Google Lens, which seeks to understand the physical objects in front of a camera and present the user with ways to interact with them.
The technological concept isn’t new — just look at Google Goggles. What’s new is the extent to which we’re able to find meaning in data that looks like noise. Consider the announcement of next-generation Tensor Processing Units (woven into Google’s cloud, no less). It more or less means the massive data crunching needed to train AIs to play poker or tell hot dogs from non-hot dogs won’t take as much time.
TensorFlow is already one of the leading platforms for teaching AIs, and doubling down on hardware that makes such growth easier is both impactful and financially savvy. TensorFlow will likely shape the way you get things done whether you’re aware of it or not. With TensorFlow Lite — a scaled-down version of the software library meant to run on mobile devices — the company believes it has found a way to make Android phones even better at interacting with you.
“We think these new capabilities will help power a next generation of on-device speech processing, visual search, augmented reality, and more,” said Android engineering chief Dave Burke on stage.
The boundaries between some of these projects are mostly conceptual. Just look at Google’s push for standalone VR headsets. I personally think Google will run into trouble positioning these devices between low-cost fare like Gear VR and the premium, Oculus Rift-level stuff. But squeezing all that intelligence and processing power into a single device is endlessly intriguing, and it doesn’t take a huge leap to see how Google’s Lens and TensorFlow Lite could make future self-contained headsets — ones that focus more on AR, at least — remarkably useful.
Google didn’t rock our faces with new phones; instead, it offered a vision of the future that feels both full of potential and surprisingly imminent. Tell me that’s not exciting.