Change is in the air, or perhaps more accurately, in the airwaves. It's visible every time a child presses a finger to a laptop screen, expecting it to respond, and in business meetings where projectors are left unused in favor of the more intimate, shared visual experience of an iPad.

The majority of the world's digital experiences now happen through mobile devices linked by wireless networks. It is this untethered medium that is defining future trends in user behavior, sweeping away the legacy of interaction methods established for fixed computing scenarios.

A child born today could grow up without ever needing to use a mouse, a physical keyboard, or any form of wired connection. Similarly, the overwhelming majority of Internet access in emerging economies is through mobile devices and most of these users will never know any other method.

The untethering of digital experiences has been predicted by specialists for some time. Indeed, there is a long history of over-estimating the short-term impact of mobile technology, but significantly under-estimating the long-term impact.

In the process of bringing together the semi-annual MEX events, I've spent time tracking the technology landscape in the mobile industry and behavioral traits among mobile users. This article looks at several future trends I expect to be of significance for UX practitioners as the balance of user expectations tilts ever further towards mobile scenarios.

Touch Breaks Down Barriers Between Physical and Digital

Firstly, there is a move from indirect to direct manipulation methods. Touchscreens offer a more natural way to interact with the digital world, and they are proliferating. Children are having their first digital experiences with touchscreens on their parents' mobile devices, and those early encounters are defining their future interface expectations.

There have already been stories of children trying to use the familiar pinch-to-zoom gesture on the physical Polaroids in family photo albums.

As more users interact with digital services through touch, the familiar "chrome" of UIs—buttons, icons and menus—will fade into the background. The content itself—be it document, photo or video—is becoming the new user interface, growing its share of screen real estate, dominating the aesthetic, and responding directly to the user's fingertips.

SMUIs Enable Truly Social Computing

As users reach out to touch the digital world, another trend will emerge: simultaneous, multi-person user interfaces (SMUIs). These are a response to behavioral traits already exhibited by tablet users. The tablet form factor inspires a shared intimacy, where two or more users often try to interact with the screen at the same time.

SMUIs represent potentially the most significant generational change facing UX practitioners. They challenge the traditional convention governing the majority of digital interfaces—to design primarily for a single user interacting with a single device at any one time.

In contrast, SMUIs allow two or more users to interact with the same device at the same time. Although many touchscreens are technically capable of recognizing multiple fingers, there are still few products that allow for elegant, simultaneous interactions by multiple users.
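One reason elegant SMUIs remain rare is that the hardware reports raw touch points, not users: the software must decide which finger belongs to whom. The following is a minimal, illustrative sketch of one naive heuristic, attributing simultaneous touch points to two users by screen region, assuming they sit side by side sharing a tablet. The function name and pointer shape are my own assumptions, not any platform's API; a real SMUI would need far richer disambiguation (hand orientation, contact size, and so on).

```javascript
// Attribute simultaneous touch points to separate users by screen half.
// Each pointer is { id, x, y }; left half goes to user A, right to user B.
function groupPointersByUser(pointers, screenWidth) {
  const sessions = { userA: [], userB: [] };
  for (const p of pointers) {
    (p.x < screenWidth / 2 ? sessions.userA : sessions.userB).push(p.id);
  }
  return sessions;
}

// Example: two users each place a finger on their side of a 1024px tablet.
const grouped = groupPointersByUser(
  [{ id: 1, x: 200, y: 300 }, { id: 2, x: 900, y: 310 }],
  1024
);
// grouped.userA -> [1], grouped.userB -> [2]
```

Even this toy version shows the design question SMUIs raise: the interface must maintain per-user state (selection, undo history, orientation) rather than a single global cursor.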

SMUIs are ideal for scenarios such as a couple planning a vacation together, children challenging each other to a multiplayer game or a family organizing their photo album.

SMUIs enable truly social computing, where the participants are physically present to share the experience.

Balancing UX and Network Austerity

The growing influence of mobile also brings constraints. Devices are smaller and have finite power resources, and wireless networks deliver a less consistent connection and impose restrictions on how much data can be downloaded by each user.

It will be some years before the technology and economics align to allow a cellular Internet experience comparable to today's fixed broadband. In the meantime, UX practitioners face an era of wireless network austerity. A balance must be struck between delivering the essentials of the customer experience and working within the limitations of wireless capacity.

There is a particularly difficult conflict to overcome. User tolerance of latency is lower on untethered devices, but wireless networks have slower connection speeds and lose connectivity more frequently.
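One common way to soften this conflict is to fall back to a cached copy when the network stalls, trading freshness for perceived responsiveness. The sketch below illustrates the idea under stated assumptions: `loadResource`, the in-memory cache, and the timeout value are all hypothetical, not any particular library's API.

```javascript
// Austerity-aware loading sketch: race the network against a timeout,
// and serve a previously cached copy if the connection is slow or down.
const cache = new Map();

async function loadResource(url, fetchFn, timeoutMs = 2000) {
  let timer;
  const timeout = new Promise((_, reject) => {
    timer = setTimeout(() => reject(new Error('network timeout')), timeoutMs);
  });
  try {
    const fresh = await Promise.race([fetchFn(url), timeout]);
    cache.set(url, fresh);                            // keep a copy for the next dead zone
    return { data: fresh, stale: false };
  } catch (err) {
    if (cache.has(url)) {
      return { data: cache.get(url), stale: true };   // degrade gracefully
    }
    throw err;                                        // no cached copy: surface the failure
  } finally {
    clearTimeout(timer);
  }
}
```

The `stale` flag matters for UX: the interface can show the cached content immediately while signalling, unobtrusively, that a fresher version is on its way.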

Designers who combine existing visual skills with increased technical knowledge of networks and programming efficiency will be best placed to create good user experiences.

Single Device, Multiple Screens

An additional trend is the multiplication of screens controlled by individual mobile devices. This is developing along two paths. Firstly, the cost of displays is falling and their relative power requirements are shrinking, enabling mobile devices to include more than one screen in a single physical product. The Nintendo DS and Toshiba Libretto are examples of this.

Secondly, it is becoming easier to abstract content to additional displays outside of the mobile device itself, connecting wirelessly to PCs, TVs and even wearable displays. Apple TV, for instance, can be controlled from an iOS mobile device, while Sony Ericsson has introduced a wearable LiveView accessory for its Android mobile devices.

Controlling multiple screens from a single device raises the possibility of experiences that combine multiple digital touchpoints to become greater than the sum of their parts.

This challenges practitioners to consider how UX is formed in the gaps between devices and to anticipate more frequent periods of partial user attention. Designers must also invest in their own education, building an understanding of the broader context of multi-screen user scenarios so they can design effectively for the multi-screen future.


Over the years, we've written a variety of articles on the future of user interfaces. As we watch history go by, though, I am struck more and more by the effect of inertia. Things do not change as fast as 'prophets' think they will, and of course they never go quite in the direction we hypothesize. And change interacts with other changes in ways we could never anticipate. Great article, nonetheless.

For me, the Magic Trackpad is the best example that user experience and principles of interaction are changing even for fixed computing scenarios. The realization of gestural interfaces is not necessarily a touchscreen, but touch as direct manipulation.

It will be interesting to witness the integration of both touch screen and gesture controls with more verbal controls in the next wave of mobile interaction.

@Giles Smith, @Louise Hewitt
I agree that observing children is a great method. But children manipulate physical objects rather than acting on a surface. The "push," "pull," and "throw" we have in touch interfaces now are just vague metaphors for this physical-world manipulating behaviour. So children's behaviour would rather support tangible interfaces.

What is interesting is that the gaming platforms are not using touch screens at all. They are already providing interfaces for multiple people that rely on gestures. It's safe to say that both touch screens and gesture controls are integral steps on the way to providing a better user interface to the technology.

What will be interesting is the point where physical interactions, gestures and verbal controls converge into a seamless user experience and what the right set of interactions will be at that point.

If we are going to define a standard set of interactions let's make it comprehensive and build in the ability for the standard to update quickly so that it doesn't get left behind as the technology improves and trends change.

Interesting article, though (perhaps with the exception of pinch-zoom) how much of this is truly revolutionary? Many of the supposed 'innovations' now being brought to fruition in touch interfaces have clear parallels in classic desktop UIs.

Perhaps these new touch interfaces only seem so revolutionary because the currently dominant UI framework, HTML, is so backward. No drag and drop, no right-click contextual menus, no keyboard interaction. Just clickable links, some of which are styled as buttons.

Agree with much, but most of all with the implications that children (especially the very young whose behaviour can be more reliably considered 'natural') are an excellent resource when creating new paradigms for interaction with touch. I secretly test all my interface concepts with my kids - the only problem is now, at 3 and 6, they are getting too experienced!

I think that you can create any kind of interface within a multitouch screen environment, or even create a very nice standard interface for a multitouch device, so I think all the ideas presented here are great.

@Andrew Niepraschk I agree with the necessity of a standard gesture set. Perhaps user demands and complaints will force companies to adhere to one set of gestures: the most dominant and widely accepted (Apple's).

@Dinesh Kaushal Gestures without physical screens would be great. This would allow users to control devices from long distances. Imagine eye, finger, and hand tracking!
Accessibility for blind and disabled users will always remain important. I hope designers and coders don't overlook this in the excitement over new technologies.

The trends suggest that the increased usage of touch screens is going to happen whether we like it or not. Another usability element will soon be gestures without screens. Portable projectors are already available, and with time we might be using any surface to read and write our documents.

All this leads to another problem: the usability of such devices for people who are blind or disabled. It is very difficult for a blind user to know what is on a screen, so using a touch screen or a gesture becomes a big challenge for them. A touch-based or gesture-based interface might trigger an action unintentionally, creating problems for a blind user.

We will have to create standard ways for blind and disabled people to use such interfaces.

"Single device, multiple screens" sounds very exciting. The iPad has seen a lot of success, but imagine a large-screen device that fits in your pocket.

There is a huge problem that must be tackled before any of this can come to fruition. The bad thing is that, rather than getting better, so far all vested parties seem more interested in making this problem worse. I am of course talking about the lack of a standard gesture set.

Consider, if you will: a child (or even an adult) learns to pinch to zoom on one device, then goes to a new device and pinches only to have the application close, and on a third device the pinch minimizes. How utterly frustrating! Instead of working together, the different players like HP, Apple, and Google are suing or threatening to sue anyone who uses a gesture-to-task mapping similar to the one on their own devices. They have the typical business mentality of "I made it; if you want to use it you can't, at least not without paying me tons of money." In reality, if Apple or Google would instead take the initiative to align their gesture sets with HP's, and then get every other gesture-using device maker on board with the same exact set, they would eventually completely obliterate the remaining "non-standard" smartphones.

Until a standard is developed and adhered to across the board, developing invisible GUIs and installing them on a device is just asking for that device to be forgotten when a standard eventually does come around. Since a standard, by my estimation, is at least two years out, probably more than five, and possibly as much as a decade, I will not be buying any hidden-GUI devices any time soon.

Thanks for your comments Jan, and Giles.

I agree there are usability problems with touch-based interfaces. However, the majority of users I observe continue to find touch preferable overall and take to it more naturally than indirect manipulation methods. As touchscreens proliferate, this method of interaction will become the expectation among the next generation of users, even as they struggle with some of the usability problems highlighted in Norman's article.

There is also an economic driver at work here: abstracting interface controls fully into touch-based software lowers costs and risks for device manufacturers.  It allows interfaces to be replicated and adapted more easily across device portfolios.

An interesting debate is emerging (we'll be looking at this during the next MEX event) around whether there are specific interactions which justify the investment in dedicated hardware controls.

For instance, you could argue that devices which are used for extensive emailing could justify the cost of including a dedicated scroll mechanism and keyboard.  Similarly, a manufacturer may choose to make a particular hardware interface into a brand statement (as Apple did a few years ago with its touch wheel).  I could imagine, say, premium mobile devices with a music focus incorporating beautifully tooled hardware volume controls or playback buttons.  The inclusion of the hardware control becomes a key part of the user experience and a differentiator from other products.

@Jan Isn't one of the first things we do when we are born to learn how things respond when we touch them? Children try to push, pull, rub, and stroke everything in an effort to understand how an object works and responds.

Maybe gestural interfaces seem like a step backwards in usability to people of our generations who have been conditioned to interact with devices using buttons and pointing devices?

I will read the article you suggest now...

It's pretty common to state that touchscreens have great usability and that the GUI fades away in order to give content more space. The second part of this is right, but interaction using hidden interfaces triggered by long-tap or "natural" gestures (which all need to be learned) is pretty bad from a usability point of view.

Read for an assessment of the topic.

Even before Don Norman's articles on the topic, I had no clue where the great usability is supposed to come from. Probably from advertisements.