Main image credit: © Ultrahaptics
There are no apps, there's nothing to click, and the remote control that's always three seconds behind has been tossed away for good. Welcome to the zero user interface, or zero UI, where there's just you, your personal digital assistant and the sound of your own voice.
The appearance of Amazon's digital assistant Alexa on a whole slew of gadgets, from the eponymous Echo and Dot to a growing list of third-party devices, has put voice at the forefront of the consumer electronics industry. However, the voice revolution stretches far deeper than being able to ask your light switch what time it is.
Expanding voice tech
Smart speakers may be kicking off the trend, but the zero UI is set not only to expand voice tech into homebots, chatbots and voice biometrics, but also to embrace face recognition, gesture control and haptic feedback. So much so that we could soon look back and laugh about the time we used to talk to Alexa, Google Assistant and Siri.
How often does Alexa mishear you? Speech recognition software tends either to listen for a narrow set of phrases when it has to respond in real time, or to need extra post-processing time to transcribe speech accurately.
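As a rough illustration of that trade-off, here's a minimal Python sketch contrasting a streaming keyword spotter (which can only match a fixed phrase list) with an offline pass over a whole utterance. The phrase list and token-level matching are simplifications: real recognisers score audio, not text.

```python
# Sketch: why real-time recognisers often limit themselves to fixed phrases.
# A streaming pass can only check each incoming token against a small,
# known list; an offline pass sees the whole utterance and can clean it up.
# (Illustrative only: real systems operate on audio features, not tokens.)

WAKE_PHRASES = {"alexa", "hey siri", "ok google"}

def spot_keywords(token_stream):
    """Streaming pass: flag tokens that match the fixed phrase list."""
    hits = []
    for token in token_stream:
        if token.lower() in WAKE_PHRASES:
            hits.append(token.lower())
    return hits

def transcribe_offline(tokens):
    """Batch pass: with the whole utterance available, we can
    'post-process' - here, just normalise and join the text."""
    return " ".join(t.lower().strip(",.!?") for t in tokens)
```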
“We are seeing a shift in the tech industry as we move away from touchpad technology towards speech as the main form of communication,” says Dr Hermann Hauser, co-founder of Amadeus Capital Partners, an investor in a real-time speech recognition system that claims to understand many languages with high accuracy. For now, though, voice tech has major disadvantages.
Multi-tasking … and text?
We all multi-task, but Alexa doesn't. “If you could have multiple 'conversations' going on at the same time, in different stages, it would be a much better experience,” says Thomas Staven, global head of pre-sales, corporate product management at Unit4, which has developed its own digital assistant.
Staven thinks that's exactly where digital assistants are headed, but also that the zero UI needs to be as flexible as possible to suit whatever environment people find themselves in.
“Sometimes voice is perfect – like in the privacy of your home, or in the car – but in a crowded office you cannot use voice as interaction,” he adds. “Another ‘UI’ like text should be available – we need to offer flexibility here.”
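Staven's two points, several concurrent conversations and a switchable voice/text channel, can be sketched as a simple session store. The class and field names below are hypothetical, not Unit4's actual design:

```python
from dataclasses import dataclass, field

@dataclass
class Conversation:
    """One of several concurrent 'conversations', each at its own stage."""
    topic: str
    channel: str          # 'voice' at home or in the car, 'text' in the office
    stage: str = "open"
    history: list = field(default_factory=list)

class Assistant:
    """Hypothetical assistant that juggles several conversations at once."""
    def __init__(self):
        self.conversations = {}

    def start(self, conv_id, topic, channel):
        self.conversations[conv_id] = Conversation(topic, channel)

    def switch_channel(self, conv_id, channel):
        # e.g. move from voice to text on entering a crowded office
        self.conversations[conv_id].channel = channel

    def say(self, conv_id, message):
        conv = self.conversations[conv_id]
        conv.history.append(message)
        return f"[{conv.channel}] {conv.topic}: {message}"
```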
That's exactly what another company is working on – and it goes way beyond adding text.
What if you could communicate with Alexa, or a personal robot, using your hands?
“Voice is very powerful, and will be one of the primary interaction methods in the future, but there are a lot of things it will never be good at, like choosing between things,” says Tom Carter, CTO at Ultrahaptics, which has developed a technology that offers mid-air touch using ultrasound.
“Soundwaves are just pressure waves moving through the air, and at specific points you get very high pressure, and low or normal pressure everywhere else,” explains Carter. “At the high-pressure spots there's enough force generated to displace the surface of your skin by gently pushing on it.”
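The physics Carter describes can be sketched as phased-array focusing: delay each speaker's output so that every wavefront arrives at the chosen point at the same instant, creating one high-pressure spot. This is textbook acoustics, not Ultrahaptics' actual implementation:

```python
import math

SPEED_OF_SOUND = 343.0  # m/s in air at room temperature

def focus_delays(speaker_positions, focal_point):
    """Per-speaker time delays (seconds) so all wavefronts arrive at the
    focal point together. Positions are (x, y, z) tuples in metres."""
    distances = [math.dist(p, focal_point) for p in speaker_positions]
    farthest = max(distances)
    # Fire the farthest speaker first (zero delay); hold nearer ones back
    # just long enough for their shorter paths to even out.
    return [(farthest - d) / SPEED_OF_SOUND for d in distances]
```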
This goes way beyond the haptic buzz you get from an Apple Watch when a message comes in. Using an array of speakers, the soundwaves can be manipulated to change the type of vibration hands can feel, so all manner of clicks, dials, shapes and textures can be created.
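One plausible way to make an ultrasonic carrier feel like anything at all is amplitude modulation at a low frequency: skin cannot sense a steady 40kHz tone, but it can feel the pressure rising and falling a few hundred times a second, and different modulation rates feel like different textures. A hedged sketch, with illustrative frequencies rather than Ultrahaptics' published parameters:

```python
import math

def modulated_pressure(t, carrier_hz=40_000, mod_hz=200):
    """Amplitude-modulate an ultrasonic carrier at a frequency skin can
    feel (~200 Hz sits in the hand's sensitive range; values illustrative)."""
    envelope = 0.5 * (1 + math.sin(2 * math.pi * mod_hz * t))  # 0..1
    return envelope * math.sin(2 * math.pi * carrier_hz * t)

def sample_waveform(duration_s=0.01, rate_hz=192_000, mod_hz=200):
    """Sample the drive signal for a given modulation rate; changing
    mod_hz would change the felt 'click' or texture."""
    n = int(duration_s * rate_hz)
    return [modulated_pressure(i / rate_hz, mod_hz=mod_hz) for i in range(n)]
```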
“We can sculpt the acoustic field,” says Carter, who is currently developing the technology for a car manufacturer. “As you're driving along, you can hold your hand out and get a projection of the volume dial on your hand – it finds you and stays stuck to your hand,” he says. “If you bring back touch, it's much more like operating a real user interface, but it's flexible and invisible.”
Ultrahaptics' ultrasound technology integrates with gesture-tracking tech, including camera-based systems.
A holographic Alexa?
Such advanced gesture technology could also be used to create haptic feedback interfaces for home appliances, such as ovens, but the goal is nothing short of the smart home at the speed of sound.
“With AR, 3D displays and 3D holograms coming out now, you can imagine a future where you have a 3D Alexa standing on the sideboard,” says Carter. “And when there's something that's too tricky for talking, it can pop up a holographic interface above the speaker, and you can reach out and, using Ultrahaptics' technology, feel it for finer interactions.”
Ultrahaptics' technology is also likely to be used in immersive entertainment, in cinemas, arcades and theme parks. Carter adds: “We can make you feel the crackle in your fingertips as you send force-lightning out of your hands, or as you cast a magic spell.”
If haptics will make how we control computers more natural and subtle, it’s advances in AI and natural language processing that will bring the most impressive informational features.
“We are now working on automotive assistants that learn from the previous behaviour of the driver to direct them to their favourite cuisine when asking ‘find me a takeaway on my way home’,” says Nils Lenke, senior director of corporate research at Nuance Communications.
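Lenke's takeaway example boils down to a frequency count over past behaviour. A minimal sketch, assuming a simple 'most-ordered cuisine wins' rule; the class and method names are hypothetical, not Nuance's design:

```python
from collections import Counter

class DrivingAssistant:
    """Toy model of an assistant that learns a driver's favourite cuisine
    from past orders and uses it to answer a vague request."""
    def __init__(self):
        self.past_orders = Counter()

    def record_order(self, cuisine):
        self.past_orders[cuisine] += 1

    def find_takeaway(self, places_on_route):
        """places_on_route: list of (name, cuisine) tuples along the drive home."""
        if self.past_orders:
            favourite, _ = self.past_orders.most_common(1)[0]
            for name, cuisine in places_on_route:
                if cuisine == favourite:
                    return name
        # No history, or no favourite on this route: fall back to the first stop.
        return places_on_route[0][0] if places_on_route else None
```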
Armed with some natural language processing they may be, but Alexa et al don't know much – and they certainly don't know you very well, which makes unscripted two-way conversations impossible. But this will eventually change.
“If we really want to communicate with bots in a more meaningful way they have to become smarter, more proactive and learn you as a user, your preferences, your behaviour, and start anticipating and suggesting things,” says Unit4's Staven.
“There will be a lot of forgiveness in the beginning if the bot doesn’t always get it right, as long as it improves and learns from your and its own behaviour.”
Empathy and meaningful relationships
The next stage is bots that can understand the nuances of human behaviour, such as humour, wit and sarcasm.
“Coupled with AI being given the chance to prove itself to a user, it will mean people beginning to trust AI and bots more,” says Matty Mariansky, co-founder of Doodle, which developed an AI scheduling assistant. Mariansky has noticed people beginning to treat bots as if they were people.
“We recently taught our bot to recognise ‘shut up’ as a command meaning ‘stop these reminders’, but when we told one user to say this to the bot she replied that she felt bad telling a chatbot to shut up when it isn’t his/her fault,” he says.
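Teaching a bot that 'shut up' means 'stop these reminders' is, at its simplest, a mapping from colloquial phrases to intents. A toy sketch; the phrase lists and intent names are invented for illustration:

```python
# Map colloquial phrases onto bot commands, as in the 'shut up' example.
INTENT_PHRASES = {
    "stop_reminders": {"shut up", "stop these reminders", "be quiet", "enough"},
    "snooze": {"remind me later", "not now"},
}

def match_intent(utterance):
    """Return the intent whose phrase list contains the normalised utterance."""
    text = utterance.lower().strip(" .!")
    for intent, phrases in INTENT_PHRASES.items():
        if text in phrases:
            return intent
    return "unknown"
```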
Empathy for software may seem strange at first, but given the technology, why wouldn’t a designer include that powerful human emotion as part of a new product’s appeal?
While Siri and Alexa are famously faceless, a fleet of new Japanese ‘homebots’, such as Lynx and Yumi, put the face first. Essentially these are just user interfaces for a cloud-powered digital assistant that occasionally shows a cute face or expression on a touchscreen while speakers play childish voices, chirps or digital cooing.
Most of them also have cameras installed in their ‘heads’, which use face recognition technology so they can customise services to specific people.
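The face-to-personalisation step can be sketched as a lookup from a recognised face ID to a stored profile; the IDs, names and profile fields below are invented for illustration, and real face recognition would of course sit in front of this:

```python
# Sketch: a homebot matching a recognised face ID to a stored profile
# so it can customise its behaviour per person (all data hypothetical).
PROFILES = {
    "face_001": {"name": "Aiko", "voice": "soft", "news": "technology"},
    "face_002": {"name": "Ken", "voice": "formal", "news": "business"},
}

def greet(face_id):
    """Tailor the greeting to whoever the camera recognised."""
    profile = PROFILES.get(face_id)
    if profile is None:
        return "Hello! I don't think we've met."
    return f"Welcome back, {profile['name']} - here's your {profile['news']} briefing."
```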
Giving a bot its own face and human-like personality is all the rage, but is that a dead-end for the zero user interface?
“Maybe the younger generation would like something with more of a personal touch – if you can configure it yourself – but we need to think beyond Japanese comic-like avatars,” says Staven. “Because that is almost embarrassing.”
The majority of bots will be used by older people, and in the workplace, and a professional, business-like bot absolutely does not chirp like a happy cat.
What the zero UI will end up like is anyone's guess, but perhaps the most important feature is that it joins the dots between devices to become a kind of central nervous system.
“Assistants, bots, things, cars and systems will all be connected to the internet, but how will they all cooperate to help their human users?” asks Lenke of Nuance Communications.
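One common answer to Lenke's question is a shared event bus: devices publish events and subscribe to each other's, cooperating instead of talking past one another. A minimal publish/subscribe sketch, with hypothetical topic and device names:

```python
from collections import defaultdict

class HomeBus:
    """Minimal publish/subscribe 'central nervous system': devices react
    to each other's events rather than competing for the user's attention."""
    def __init__(self):
        self.subscribers = defaultdict(list)
        self.log = []

    def subscribe(self, topic, device, handler):
        self.subscribers[topic].append((device, handler))

    def publish(self, topic, payload):
        # Deliver the event to every subscriber, recording each response.
        for device, handler in self.subscribers[topic]:
            self.log.append((device, handler(payload)))
        return self.log
```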
Interoperability between many systems is where the future of the user interface lies, because there’s nothing worse than an artificially intelligent argument.