What Is Zero UI? (And Why Is It Crucial To The Future Of Design?)

 

What does UI design look like after screens go away? Fjord’s Andy Goodman explains.

For better or worse, a large amount of design work these days is visual. That makes sense, since the most essential products we interact with have screens. But as the internet of things surrounds us with devices that can hear our words, anticipate our needs, and sense our gestures, what does that mean for the future of design, especially as those screens go away?

Last week at San Francisco’s SOLID Conference, Andy Goodman, group director of Fjord, shared his thoughts on what the new paradigm of design will look like when our interfaces are no longer constrained by screens and instead turn to haptic, automated, and ambient interfaces. He calls it Zero UI. We talked to him about what it meant.

What Is Zero UI?

Zero UI isn’t really a new idea. If you’ve ever used an Amazon Echo, changed a channel by waving at a Microsoft Kinect, or set up a Nest thermostat, you’ve already used a device that could be considered part of Goodman’s Zero UI thinking. It’s all about getting away from the touchscreen and interfacing with the devices around us in more natural ways: haptics, computer vision, voice control, and artificial intelligence. Zero UI is the design component of all these technologies as they pertain to what we call the internet of things.

“If you look at the history of computing, starting with the Jacquard loom in 1801, humans have always had to interact with machines in a really abstract, complex way,” Goodman says.

Over time, these methods have become less complex: the punch card gave way to machine code, machine code to the command line, command line to the GUI. But machines still force us to come to them on their terms, speaking their language. The next step is for machines to finally understand us on our own terms, in our own natural words, behaviors, and gestures. That’s what Zero UI is all about.

How Will Zero UI Change Design?

According to Goodman, Zero UI represents a whole new dimension for designers to wrestle with. Literally. He likens the designer’s leap from UI to Zero UI to what happens in the novella Flatland: instead of just designing for two dimensions (what a user is trying to do right now in a linear, predictable workflow), designers need to think about what a user is trying to do right now in any possible workflow.

Take voice control, for instance. Right now, voice control through something like Amazon Echo or Siri is relatively simple: a user asks a question (“Who was the 4th president of the United States?”) or makes a statement (“Call my husband”) and the device acts upon that single request. But ask Siri to “Message my husband the 4th president of the United States, then tell me who the fifth is?” and it’ll barf all over itself. To build services and devices that can translate a stream-of-consciousness command like that, designers will need to think non-linearly. They’ll need to be able to build a system capable of adjusting to anything on the fly.
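To make that concrete, here is a minimal, hypothetical sketch of what handling a compound command non-linearly might look like: one utterance becomes an ordered plan of separate intents rather than a single request. The splitter, intent names, and structure are invented for illustration; this isn’t Siri’s, Echo’s, or any real assistant’s API.

    # Hypothetical sketch: treat a compound voice command as an ordered plan of
    # intents rather than one linear request. Every name here is an invented
    # assumption for illustration, not a real assistant API.

    def split_into_intents(utterance):
        """Naively split a compound command on the sequencing word 'then'."""
        parts = utterance.replace(",", "").split(" then ")
        return [part.strip() for part in parts if part.strip()]

    def plan(utterance):
        """Turn each fragment into a rough intent record. A later step
        ('the fifth') may only make sense with context carried over from
        earlier ones, which is exactly the non-linear problem designers face."""
        steps = []
        for fragment in split_into_intents(utterance):
            lowered = fragment.lower()
            if lowered.startswith("message"):
                steps.append({"intent": "send_message", "text": fragment})
            elif "who" in lowered:
                steps.append({"intent": "answer_question", "text": fragment})
            else:
                steps.append({"intent": "unknown", "text": fragment})
        return steps

    command = ("Message my husband the 4th president of the United States "
               "then tell me who the fifth is")
    for step in plan(command):
        print(step)

Even this toy plan hints at why Goodman frames it as a dimensional shift: the second step can’t be answered without context from the first.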

“It’s like learning to play 3-D chess,” Goodman laughs. “We need to think away from linear workflows, and toward a multi-dimensional thought process.”

Zero UI Will Require Designers To Rely On Data And AI

Whereas interface designers today live in apps like InDesign and Adobe Illustrator, the non-linear design problems of Zero UI will require vastly different tools and skill sets.

“We might have to design in databases, or lookup tables, or spreadsheets,” Goodman says, explaining that data, not intuition, will become a designer’s most valuable asset. “Designers will have to become experts in science, biology, and psychology to create these devices… stuff we don’t have to think about when our designs are constrained by screens.”

For example, let’s say you have a television that can sense gestures. Depending on who is standing in front of that TV, the gestures it needs to understand to do something as simple as turning up the volume might be radically different: a 40-year-old who grew up in the age of analog interfaces might twist an imaginary dial in mid-air, while a millennial might jerk their thumb up. A Zero UI television will need access to a lot of behavioral data, not to mention the processing power to decode it.
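As a toy sketch of what “designing in a lookup table” could mean here, the same intent might map to different gestures depending on the viewer’s behavioral profile. The profiles, gesture labels, and confidence threshold below are assumptions made up for illustration, not any real gesture-recognition product.

    # Toy sketch: the same intent ("volume up") maps to different gestures
    # depending on the viewer's behavioral profile. Profiles, gesture labels,
    # and the confidence threshold are invented for illustration.

    from typing import Optional

    GESTURE_TABLE = {
        ("analog_era", "twist_clockwise"): "volume_up",
        ("analog_era", "twist_counterclockwise"): "volume_down",
        ("touch_era", "thumb_up"): "volume_up",
        ("touch_era", "thumb_down"): "volume_down",
    }

    def interpret(profile: str, gesture: str, confidence: float) -> Optional[str]:
        """Resolve a recognized gesture to an action for this viewer's profile,
        ignoring low-confidence detections."""
        if confidence < 0.8:
            return None
        return GESTURE_TABLE.get((profile, gesture))

    print(interpret("analog_era", "twist_clockwise", 0.93))  # volume_up
    print(interpret("touch_era", "thumb_up", 0.91))          # volume_up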

“As we move away from screens, a lot of our interfaces will have to become more automatic, anticipatory, and predictive,” Goodman says. A good example of this sort of device, he says, is the Nest: you set its temperature once, and then it learns to anticipate what you want based on how you interact with it from there.
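As a rough illustration of that anticipatory idea (and explicitly not Nest’s actual algorithm), a thermostat could simply remember manual adjustments by hour of day and suggest the average as its future setpoint:

    # Minimal sketch of "anticipatory" behavior: remember when the user manually
    # changes the temperature and predict their preference for that hour later.
    # This is an invented illustration, not how Nest actually works.

    from collections import defaultdict
    from statistics import mean

    class LearningThermostat:
        def __init__(self, default_setpoint=20.0):
            self.default = default_setpoint
            self.history = defaultdict(list)  # hour of day -> manual setpoints

        def record_adjustment(self, hour, setpoint):
            """The user manually changed the temperature; remember when and to what."""
            self.history[hour].append(setpoint)

        def predicted_setpoint(self, hour):
            """Anticipate the user's preference for this hour, or fall back to default."""
            return mean(self.history[hour]) if self.history[hour] else self.default

    thermostat = LearningThermostat()
    thermostat.record_adjustment(hour=7, setpoint=21.5)
    thermostat.record_adjustment(hour=7, setpoint=22.0)
    print(thermostat.predicted_setpoint(7))   # 21.75, learned from past behavior
    print(thermostat.predicted_setpoint(13))  # 20.0, no history for this hour yet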

What’s After Zero UI?

Although Goodman is serious when he says that screens will stop being the primary way we interact with the devices around us, he’s the first to admit that the Zero UI name isn’t meant to be taken literally. “It’s really meant to be a provocation,” Goodman admits. “There are always going to be user interfaces in some form or another, but this is about getting away from thinking about everything in terms of screens.”

But if Goodman’s right, and the entire history of computing is less a progression of mere technological advancements and more a progression of advancements in how we communicate with machines, then what happens after we achieve Zero UI? What happens when our devices finally understand us better than we understand ourselves? Will anything we want to do with an app, gadget, or device be just a shrug, a grunt, or a caress away?

“I’m really into all that singularity stuff,” Goodman laughs. “Once you get to the point that computers understand us, the next step is that computers get embedded in us, and we become the next UI.”

Correction: An earlier version of this article misidentified Goodman’s title. It has been changed.


Fast Company
