Information architecture (IA) is all about making the complex clear: helping users understand where they are, what’s around them and how to find what they’re looking for. IA makes sense of very complex structures and offers users simplicity and efficiency. Done well, IA creates the most direct and natural route, meaning users don’t have to think about their journey and can instead engage more fully with the product, service or brand. It’s an essential part of the user experience: well thought-out organisation, structure and labelling enables us to navigate vast amounts of content with ease.
The origins of IA can be traced back to the 1970s, well before the rise of the internet or the coining of the term ‘user experience’, and its main principles are rooted in library science, architecture and cognitive psychology. Despite these traditional roots, information architecture, like the technology it serves, is constantly evolving. Take, for example, the shift from desktop to mobile: this has had a big impact on where, when and how we consume content, affecting our mindset and priorities. IA has adapted to ensure content remains easy to access on a much smaller screen and with far greater distractions. As technological developments continue to offer new ways to engage with content, such as smart wearables and smart speakers, IA must evolve to keep offering the optimal way of accessing it.
One area of growing importance, with big implications for IA, is voice interaction design. A Voice User Interface (VUI) enables a user to interact with a system using voice or speech commands. The best-known examples of VUIs are Siri, Google Assistant and Alexa.
Over the last five years I have maintained an element of scepticism about the benefits actually offered by virtual assistants like Alexa. The ability to turn lights and music on and off without moving seemed to me to simply perpetuate laziness. My perspective altered somewhat when my uncle got one. My uncle has advanced Multiple Sclerosis (MS), and for a few years his smart speaker gave him a small but incredibly powerful element of independence, enabling him to turn his TV and radio on and off and call family without having to wait for a carer to do it. This really opened my eyes to the potential benefits of VUIs in terms of accessibility.
Further research highlights just how necessary improvements in web browsing accessibility are. In August 2019, the results of a study conducted by Nucleus Research revealed that ‘two-thirds of the Internet transactions initiated by people with vision impairments end in abandonment because the websites they visit aren’t accessible enough.’ As VUIs become more commonly used and the physical and digital worlds merge, websites will be forced to adapt in order to remain relevant. This should have a positive effect in universally increasing accessibility.
Visual impairments and mobility issues can affect us all, whether permanently or temporarily. Having the option to fully utilise voice commands will open up a whole new realm of opportunities, offering alternative ways to interact with the world around us and removing the need to spend every waking minute glued to a screen.
Searching via voice command offers convenience and speed, but this is often just the first step of a task; you then need to switch back to interacting with a Graphical User Interface (GUI). It will be interesting to see how a VUI can be fully integrated into e-commerce sites, and whether this will contribute to a more efficient and engaging shopping experience. Utilising a VUI alongside the more traditional GUI could enable a less cluttered and more personalised home screen focused on inspiring the shopper, whilst more specific searches are conducted vocally.
The most obvious benefit of a VUI is that it enables users to interact with a product without looking at or touching it; the focus of their attention can be elsewhere. The possibilities seem particularly pertinent in the current pandemic, where reduced contact and added distance are necessary.
Within a hospital setting, medical practitioners could focus their full attention on the main task at hand: treating patients. Reducing the need to interact with screens could also mean not having to wash hands as frequently.
Many community hubs like doctors’ surgeries, libraries and banks currently rely on customers interacting with touch screens. It will be interesting to see how these services could be adapted to offer a frictionless journey through the use of a VUI, although the security implications would need careful consideration.
Designing for VUIs presents many challenges, especially as we are only at the very beginning of what is possible and best practices are still being established. As designers, we have far more to consider when developing this kind of experience, and it cannot be approached in the same way as a graphical user interface.
A VUI needs to let users know what options they have at every stage of the interaction, as well as providing clear feedback on what the system is currently doing. Consideration must also be given to the quantity of information the system outputs at any one time. Most of us are very limited in the amount of instruction we can easily recall, and this is reduced further when our attention is elsewhere.
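To make these two principles concrete, here is a minimal, purely illustrative sketch (not based on any real VUI framework; all names are hypothetical): a single spoken turn that leads with feedback on what the system is doing, then offers a deliberately capped list of options, since long spoken lists are hard to recall.

```python
# Illustrative sketch only: compose one spoken VUI turn.
# Feedback comes first, then a short, recallable set of options.

MAX_SPOKEN_OPTIONS = 3  # spoken lists longer than this are hard to recall


def build_prompt(current_action: str, options: list) -> str:
    """Build one turn: confirm the system's action, then offer a capped list."""
    feedback = f"Okay, {current_action}."
    offered = options[:MAX_SPOKEN_OPTIONS]
    prompt = feedback + " You can say: " + ", ".join(offered) + "."
    if len(options) > MAX_SPOKEN_OPTIONS:
        # Signal that more exists without listing everything at once.
        prompt += " Or say 'more options'."
    return prompt


print(build_prompt("adding milk to your basket",
                   ["checkout", "keep shopping", "remove an item", "view offers"]))
```

The cap is the key design choice: rather than reading every option aloud, the system keeps each turn short and lets the user ask for more.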
Users tend to associate voice interactions with human conversation rather than with technology, and are often unsure of the level of complexity the system can understand. Limitations need to be established upfront in order to set realistic user expectations. As humans, the ways in which we phrase questions and responses vary massively, even within the same language and culture. To be successful, VUIs need to be able to ‘train’ users so that they quickly come to understand what type of voice commands they can use and what type of interactions they can perform.
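A toy sketch of this idea (again hypothetical, and far simpler than real intent recognition, which matches meaning rather than exact phrases): several phrasings map to one intent, and an unrecognised request triggers a fallback that teaches the user what they can say instead of failing silently.

```python
# Illustrative sketch only: map varied phrasings to a single intent,
# and use the fallback to 'train' users on available commands.
from typing import Optional

INTENTS = {
    "turn_on_lights": {"turn on the lights", "lights on", "switch the lights on"},
    "play_radio": {"play the radio", "radio on", "put the radio on"},
}


def resolve_intent(utterance: str) -> Optional[str]:
    """Return the matching intent name, or None if nothing matches."""
    normalised = utterance.strip().lower()
    for intent, phrasings in INTENTS.items():
        if normalised in phrasings:
            return intent
    return None


def respond(utterance: str) -> str:
    """Handle one utterance; on failure, set expectations with examples."""
    intent = resolve_intent(utterance)
    if intent is None:
        # Teach rather than just fail: offer one sample phrasing per intent.
        samples = [sorted(p)[0] for p in INTENTS.values()]
        return ("Sorry, I can't do that yet. Try: "
                + " or ".join(f"'{s}'" for s in samples) + ".")
    return f"[{intent}]"


print(respond("Lights on"))        # recognised
print(respond("open the garage"))  # fallback teaches available commands
```

The fallback message is doing the ‘training’ described above: every failed request becomes a chance to show the user what the system can actually do.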
From a design perspective, further complications arise from vastly differing accents, colloquial slang and background noise. These issues highlight the need to build better speech recognition engines and to increase their capacity to identify different languages and tones of voice.
Many of us have experienced the sheer frustration of older VUIs which didn’t work as we expected or repeatedly failed to recognise our commands. I for one rarely use voice commands when driving; I simply don’t trust the system in my car to interpret my requests correctly. After repeatedly attempting to call my partner using a voice command I once accidentally left a rather aggressive voicemail of expletives and commands to ‘end call’ for a friend’s mum whom I hadn’t spoken to in over a year.
In this situation, the sheer annoyance of repeated attempts and constant incorrect confirmations was a far bigger distraction from driving than simply pressing a few buttons. When products ask users to change or adapt their behaviour, trust becomes very important, and it can only be built if both the VUI and the user are able to learn from each interaction, continually improving accuracy and ease of communication.
The recent pandemic has opened our eyes to how tied we still are to traditional routines and outdated ways of doing things. It has opened up dialogues on what is meaningful and necessary, and caused us to question what we want our future to look like. Whilst routine and physical interactions will always be important, necessity has fast-tracked a lot of positive change that enables more flexibility to do things our own way. The current need for frictionless interactions should accelerate more efficient and accessible processes that free up time and focus for what really matters. As technological advancements continue to offer more applications and opportunities for integrating increasingly sophisticated voice user interfaces, I believe they will become an essential part of our everyday lives. As designers, we need to be prepared for this, and it will be interesting to see how information architecture evolves to develop best practices specifically for this area.