8 rules for designing a smart, caring and fun in–car virtual assistant
Jul 28, 2021 by Olaf Preissner
Virtual assistant services have come on leaps and bounds since the turn of the millennium. You can find smart speakers in almost every home in the Western world. In fact, most of us carry virtual assistants around 24/7 — every new mobile contains its own version. It’s no surprise then, that users increasingly expect the same options in their cars.
In their 2020 survey, Voice.ai recorded a more than 13% increase in in-vehicle virtual assistant usage. A year earlier, the Capgemini Research Institute predicted that by 2022, 95% of us would be using voice-based virtual personal assistants as our main in-vehicle user interface. That said, both studies reported that, for now, we only use the technology for simple virtual assistant services, mainly:
Voice in-car use cases (source: Voice.ai, 2020)
With increasing use, though, user expectations are growing. Consumers want improved capabilities, enhanced voice recognition and greater sophistication: an experience more akin to speaking to a person than to a machine.
Our own survey (March 2020) defined the perfect virtual assistant as a proactive system with personality that delivers output based on situations and type of user — but how do you get that? Well, shaping the user interactions of an in–vehicle virtual assistant requires an enhanced design approach. The UX/UI designer needs to integrate traditional principles with new ones for advanced use cases because the interaction happens inside the vehicle cabin, while the user’s cognition and attention are shared with the driving task.
Every time we design the user interaction of an in–car virtual assistant, we apply eight design insights to ensure a smart, fun and caring outcome.
A virtual assistant is any software application that combines AI with advanced natural language processing (NLP) and comprehension to understand everyday language commands and complete the associated tasks for the user.
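That command-to-task loop can be sketched in a few lines. The example below is a deliberately simplified, hypothetical illustration: a real assistant uses statistical NLP models for speech recognition and intent classification, not keyword matching, and the intents and phrases here are invented for the sketch.

```python
# Hypothetical sketch of an assistant's command-to-task loop.
# Assumes the spoken utterance has already been transcribed to text;
# real systems replace this keyword matching with trained NLP models.

def recognize_intent(utterance: str) -> str:
    """Map a transcribed command to a coarse intent label."""
    text = utterance.lower()
    if "call" in text or "phone" in text:
        return "make_call"
    if "navigate" in text or "directions" in text:
        return "navigation"
    if "play" in text or "music" in text:
        return "play_media"
    return "unknown"

def handle(utterance: str) -> str:
    """Dispatch the recognized intent to a task and return the spoken reply."""
    responses = {
        "make_call": "Placing the call.",
        "navigation": "Starting route guidance.",
        "play_media": "Playing your music.",
        "unknown": "Sorry, I didn't understand that.",
    }
    return responses[recognize_intent(utterance)]

print(handle("Navigate to the nearest charging station"))
# Starting route guidance.
```

Even this toy version shows why early in-car systems felt rigid: anything outside the hard-coded vocabulary falls into "unknown", which is exactly the limitation that modern NLP removed.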
Here’s how it works
The primary element of NLP technology (recognition and synthesis) dates back to 1962. IBM presented a tool called Shoebox that could recognize 16 words and ten digits. Companies in various industries continued to develop NLP for specific applications (e.g., Nuance PC-based dictation software).
In 2011, Apple’s personal assistant — Siri — combined AI and voice technology. Since then, every big tech company has launched a variant: Google with Google Now (2012), Microsoft with Cortana (2014) and Amazon with Alexa (2015).
In–car virtual assistants have been around since Honda and IBM ViaVoice first introduced the technology to an expectant automotive industry in 2004. Back then, virtual assistants only recognized and responded to a limited number of specific commands to complete simple tasks. However, with the advent of NLP, the voice interface began to recognize a larger vocabulary and more complex sentences.
Three years later, Nissan presented their Pivo 2 concept car at the Tokyo Motor Show. The main goal of the Pivo 2 assistant was to deliver fun and functionality, developing a unique relationship between car and driver on the daily commute.
Fast forward to 2019. Nio unveiled Nomi, an innovative virtual assistant which was intended to be more of an intermediary, creating a completely new type of relationship between brand and driver.
Now, the rush to realize and leverage this new relationship is a growing trend in the automotive industry, especially for innovation-oriented markets like China and Japan.
The latest AI–powered virtual assistants — Hey Mercedes in MBUX, for example — have taken user interaction to new levels of complexity and personalization.
With the technology to back it up and the design principles to guide us, it’s great to have the chance to redesign the in-car experience. We have the opportunity to introduce new features and use cases to satisfy expressed and unexpressed user needs. For instance, around 30% of people develop symptoms of travel sickness on a trip (5% severely). Voice-based virtual assistants can distract and entertain sufferers while limiting the conflict between the visual and vestibular systems.
New generations, particularly the tech-native Generation Alpha (children of Millennials), automatically feel at home with the virtual assistant’s new use cases which range from telling stories and entertaining passengers on a long journey, to more advanced personalization-based experiences.
Contact Olaf Preissner to find out more about designing caring and sophisticated virtual assistants that make life less stressful and more fun.