
Mobile User Experience Trends on the Horizon

by Marek Pawlowski

A look at several future trends of significance for UX practitioners.

Change is in the air, or perhaps more accurately, in the airwaves. It’s visible every time a child presses a finger to a laptop screen, expecting it to respond, and in business meetings where projectors are left unused in favor of the more intimate, shared visual experience of an iPad.

The majority of the world’s digital experiences now happen through mobile devices linked by wireless networks. It is this untethered medium that is defining future trends in user behavior, sweeping away the legacy of interaction methods established for fixed computing scenarios.

A child born today could grow up without ever needing to use a mouse, a physical keyboard, or any form of wired connection. Similarly, the overwhelming majority of Internet access in emerging economies is through mobile devices and most of these users will never know any other method.

The untethering of digital experiences has been predicted by specialists for some time. Indeed, there is a long history of over-estimating the short-term impact of mobile technology while significantly under-estimating its long-term impact.

In the process of bringing together the semi-annual MEX events, I’ve spent time tracking the technology landscape in the mobile industry and behavioral traits among mobile users. This article looks at several future trends I expect to be of significance for UX practitioners as the balance of user expectations tilts ever further towards mobile scenarios.

Touch Breaks Down Barriers Between Physical and Digital

Firstly, there is a move from indirect to direct manipulation methods. Touchscreens are a more natural way to interact with the digital world, and are proliferating. Children are having their first digital experiences with touchscreens on their parents’ mobile devices, which are defining their future interface expectations.

There have already been stories of children trying to use the familiar pinch-to-zoom gesture on the physical Polaroids in family photo albums.

As more users interact with digital services through touch, the familiar “chrome” of UIs—buttons, icons and menus—will fade into the background. The content itself—be it document, photo or video—is becoming the new user interface, growing its share of screen real estate, dominating the aesthetic, and responding directly to the user’s fingertips.
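
To make this concrete, the sketch below shows, in TypeScript, how a web view might let a photo respond directly to a pinch gesture using the standard Pointer Events API. The element id and the 1x to 4x scale limits are illustrative assumptions, not details from any particular product.

```typescript
// A minimal pinch-to-zoom sketch using the standard Pointer Events API.
// The element id and the scale limits are illustrative assumptions.
const photo = document.getElementById('photo') as HTMLElement;
photo.style.touchAction = 'none'; // disable the browser's built-in pan/zoom on this element

const activePointers = new Map<number, PointerEvent>();
let startDistance = 0;
let startScale = 1;
let scale = 1;

// Distance between the two tracked fingers.
function pinchDistance(): number {
  const [a, b] = [...activePointers.values()];
  return Math.hypot(a.clientX - b.clientX, a.clientY - b.clientY);
}

photo.addEventListener('pointerdown', (e) => {
  activePointers.set(e.pointerId, e);
  if (activePointers.size === 2) {
    startDistance = pinchDistance();
    startScale = scale;
  }
});

photo.addEventListener('pointermove', (e) => {
  if (!activePointers.has(e.pointerId)) return;
  activePointers.set(e.pointerId, e);
  if (activePointers.size === 2 && startDistance > 0) {
    // The content itself is the interface: it scales with the spread of the fingers.
    scale = Math.min(4, Math.max(1, startScale * (pinchDistance() / startDistance)));
    photo.style.transform = `scale(${scale})`;
  }
});

const endPinch = (e: PointerEvent) => {
  activePointers.delete(e.pointerId);
  if (activePointers.size < 2) startDistance = 0;
};
photo.addEventListener('pointerup', endPinch);
photo.addEventListener('pointercancel', endPinch);
```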

SMUIs Enable Truly Social Computing

As users reach out to touch the digital world, another trend will emerge: simultaneous, multi-person user interfaces (SMUIs). These are a response to behavioral traits already exhibited by tablet users. The tablet form factor inspires a shared intimacy, where two or more users often try to interact with the screen at the same time.

SMUIs represent potentially the most significant generational change facing UX practitioners. They challenge the traditional convention governing the majority of digital interfaces—to design primarily for a single user interacting with a single device at any one time.

In contrast, SMUIs allow two or more users to interact with the same device at the same time. Although many touchscreens are technically capable of recognizing multiple fingers, there are still few products that allow for elegant, simultaneous interactions by multiple users.

SMUIs are ideal for scenarios such as a couple planning a vacation together, children challenging each other to a multiplayer game or a family organizing their photo album.

SMUIs enable truly social computing, where the participants are physically present to share the experience.
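
Touch hardware and APIs typically report which pointers are active, but not which person each finger belongs to, so an SMUI has to infer ownership itself. The TypeScript sketch below is one simple illustration rather than an established pattern: it attributes simultaneous pointers to two co-present users by splitting the screen in half, the kind of heuristic a head-to-head tablet game might use. The element id and the left/right split are assumptions.

```typescript
// Sketch: attributing simultaneous touches to two co-present users.
// Multi-touch hardware reports many pointers but not who owns each one, so this
// example uses a simple spatial heuristic (left half vs. right half of the screen).
// The heuristic and the element id are assumptions, not a platform feature.
type Player = 'left' | 'right';

const surface = document.getElementById('shared-surface') as HTMLElement;
surface.style.touchAction = 'none';

const pointerOwner = new Map<number, Player>();

function ownerFor(e: PointerEvent): Player {
  return e.clientX < window.innerWidth / 2 ? 'left' : 'right';
}

surface.addEventListener('pointerdown', (e) => {
  const player = ownerFor(e);
  pointerOwner.set(e.pointerId, player);
  console.log(`${player} player touched at (${e.clientX}, ${e.clientY})`);
});

surface.addEventListener('pointermove', (e) => {
  const player = pointerOwner.get(e.pointerId);
  if (!player) return;
  // Route this drag to the owning player's piece, paddle or photo stack here.
});

surface.addEventListener('pointerup', (e) => pointerOwner.delete(e.pointerId));
surface.addEventListener('pointercancel', (e) => pointerOwner.delete(e.pointerId));
```

The point is simply that each pointer stream is routed independently, so one user's interaction never blocks the other's.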

Balancing UX and Network Austerity

The growing influence of mobile also brings constraints. Devices are smaller and have finite power resources, and wireless networks deliver a less consistent connection and impose restrictions on how much data can be downloaded by each user.

It will be some years before the technology and economics align to allow a cellular Internet experience comparable to today’s fixed broadband. In the meantime, UX practitioners face an era of wireless network austerity: a balance must be struck between delivering the essentials of the customer experience and working within the limits of wireless capacity.

There is a particularly difficult conflict to overcome. User tolerance of latency is lower on untethered devices, but wireless networks have slower connection speeds and lose connectivity more frequently.

Designers who combine existing visual skills with increased technical knowledge of networks and programming efficiency will be best placed to create good user experiences.
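
As one hedged illustration of what that technical knowledge can look like in practice, the TypeScript sketch below picks a lighter or heavier image variant based on the Network Information API (navigator.connection), which is not available in every browser. The URL scheme, thresholds and fallback behaviour are assumptions made for the sake of the example.

```typescript
// Sketch: choosing a lighter or heavier asset based on connection quality.
// navigator.connection (the Network Information API) is not available in every
// browser, so the code falls back to the lightweight variant when it is missing.
// The URL scheme and thresholds are illustrative assumptions.
interface NetworkInformationLike {
  effectiveType?: string; // e.g. 'slow-2g', '2g', '3g', '4g'
  saveData?: boolean;     // the user has asked for reduced data usage
}

function imageVariant(): 'low' | 'high' {
  const conn = (navigator as Navigator & { connection?: NetworkInformationLike }).connection;
  if (!conn || conn.saveData) return 'low'; // unknown network or data saver: assume austerity
  return conn.effectiveType === '4g' ? 'high' : 'low';
}

async function loadPhoto(id: string): Promise<Blob> {
  const res = await fetch(`/photos/${id}?quality=${imageVariant()}`);
  if (!res.ok) throw new Error(`Failed to load photo ${id}: ${res.status}`);
  return res.blob();
}
```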

Single Device, Multiple Screens

An additional trend is the multiplication of screens controlled by individual mobile devices. This is developing along two paths. Firstly, the cost of displays and their relative power requirements are falling, enabling mobile devices to include more than one screen in a single physical product. The Nintendo DS and Toshiba Libretto are examples of this.

Secondly, it is becoming easier to abstract content to additional displays outside of the mobile device itself, connecting wirelessly to PCs, TVs and even wearable displays. Apple TV, for instance, can be controlled from an iOS mobile device, while Sony Ericsson has introduced a wearable LiveView accessory for its Android mobile devices.
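
One way a web application running on a phone can abstract content to a nearby display today is the Presentation API, sketched below in TypeScript. Browser support is limited, and the receiver URL and message format are illustrative assumptions rather than a standard protocol.

```typescript
// Sketch: pushing content from a phone's browser to a nearby display using the
// Presentation API. Support is limited to some browsers, and the receiver URL
// and message format below are illustrative assumptions.
const request = new PresentationRequest(['https://example.com/receiver.html']);

async function showOnBigScreen(photoUrl: string): Promise<void> {
  // Prompts the user to pick an available display (for example, a cast-enabled TV).
  const connection = await request.start();
  const sendPhoto = () =>
    connection.send(JSON.stringify({ type: 'show-photo', url: photoUrl }));
  if (connection.state === 'connected') {
    sendPhoto();
  } else {
    connection.addEventListener('connect', sendPhoto);
  }
}
```

In this arrangement the receiver page renders the photo full-screen while the phone keeps its own controls, a simple example of two screens doing different jobs within one experience.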

Controlling multiple screens from a single device raises the possibility of experiences that combine multiple digital touchpoints to become greater than the sum of their parts.

This challenges practitioners to consider how UX is formed in the gaps between devices and to anticipate more frequent periods of partial user attention. Designers should also consider investing in their own education and in understanding the broader context of multi-screen user scenarios, so they can design effectively for the multi-screen future.

About the author

Marek Pawlowski is the founder of MEX (https://pmn.co.uk/mex/). Since 1995, he has focused the MEX business on helping digital industries to develop better, more profitable products through improved understanding of user behaviour with mobile devices and wireless networks. MEX is best known for its events, research and consulting, which balance commercial, technical and user insights to define the future of mobile user experience. The next MEX event, ‘6 Pathways to the mobile UX horizon’, is in London on 30 November - 01 December 2010.
