
Improving Mobile Interfaces with Rich-Touch Interactions

by Chris Harrison
6 min read

For mobile devices to maximize their computing potential, we need touch-based interfaces that utilize the complex input our fingers and hands can express.

Today’s mobile devices are incredibly sophisticated computers, with multi-gigahertz, multi-core processors, hardware-accelerated graphics, network connectivity, and many gigabytes of high-speed memory. Indeed, the smartphones we carry in our pockets today would have been deemed supercomputers just twenty years ago.

Yet, the fundamental usability issue with mobile devices is apparent to anyone who has used one: they are small. Achieving mobility through miniaturization has been both their greatest success and most significant shortcoming.

Developments in human-computer interfaces have significantly trailed behind the tremendous advances in electronics. As such, we do not use our smartphones and tablets like our laptops or desktop computers. This issue is particularly acute with wearable devices, which must be even smaller in order to be unobtrusive.

Within the last year, the first standalone consumer smartwatches have emerged—a tremendous milestone in the history of computing—but the industry is still struggling with applications for this platform. At present, the most compelling uses are notifications and health tracking—a far cry from the platform’s true potential.

Developing and deploying a new interaction paradigm is high risk; it is easier and more profitable to be a fast follower than it is to lead such a revolution, and few companies have survived the quest to lead one. This has fostered an industry in which players keenly watch one another, are quick to sue, and rarely innovate in public. Comparing the original 2007 iPhone to a 2014 iPhone 5s, it is easy to see that while the hardware has continued to get faster and better, the core interactive experience has hardly evolved in nearly a decade.

The Rich-Touch Revolution

“Each morning begins with a ritual dash through our own private obstacle course—objects to be opened or closed, lifted or pushed, twisted or turned, pulled, twiddled, or tied, and some sort of breakfast to be peeled or unwrapped, toasted, brewed, boiled, or fried. The hands move so ably over this terrain that we think nothing of the accomplishment.”—Frank Wilson (The Hand)

Touch interaction has been a significant boon to mobile devices, by enabling direct manipulation interfaces and allowing more of the device to be dedicated to interaction. However, in the seven years since multi-touch devices went mainstream, primarily with the release of the iPhone, the core user experience has evolved little.

Contemporary touch gestures rely on poking screens with different numbers of fingers: one-finger tap, two-finger pinch, three-finger swipe, and so on. For example, a “right click” can be triggered with a two-finger tap. On some platforms, moving the cursor vs. scrolling is achieved with one- or two-finger drags, respectively. On some Apple products, four-finger swipes let users switch between desktops or applications. Other combinations exist, but they generally share one commonality: the number of fingers parameterizes the action.
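To make the limitation concrete, the finger-count scheme amounts to a lookup keyed on contact count. The sketch below is hypothetical (no real platform exposes exactly this table), but it captures the structure:

```python
# Hypothetical sketch of finger-count gesture dispatch; the (count, motion)
# pairs and action names below are illustrative, not any platform's API.

def classify_gesture(finger_count: int, motion: str) -> str:
    """Map a contact count and motion type to an action."""
    actions = {
        (1, "tap"): "select",
        (2, "tap"): "right-click",
        (1, "drag"): "move-cursor",
        (2, "drag"): "scroll",
        (4, "drag"): "switch-application",
    }
    return actions.get((finger_count, motion), "unrecognized")

print(classify_gesture(2, "tap"))  # right-click
```

Notice that pose, force, and which part of the finger made contact never enter the lookup; only the count does.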

This should be a red flag: the number of digits employed does not characterize actions we perform in the real world. I do not two-finger drink my coffee or three-finger sign my name—finger count is simply not a human-centered dimension, nor is it particularly expressive! Instead, we change the mode of our hands in the world (and, in turn, of the tools we wield) by varying the configuration of our hands and the forces our fingers apply. Indeed, the human hand is incredible, yet we boil its input down to a 2-D location on today’s touch devices.

The human hand is incredible, yet we boil its input down to a 2-D location on today’s touch devices

Fortunately, with good technology and design, we can elevate touch interaction to new heights. This has recently led to a new area of research—one that looks beyond multi-touch and aims to create a new category of “rich-touch” interactions. Whereas multi-touch was all about counting the number of fingers on the screen (hence the “multi”), rich touch aims to digitize the complex dimensions of input our fingers and hands can express—shear force, pressure, grasp pose, which part of the finger made contact, whose finger it is, and so on. These are the rich dimensions of touch that make interacting in the real world powerful and fluid.

The Early Days of Rich-Touch Interaction

Initial research has already proven successful. One such technology, developed by my team when we were graduate students at Carnegie Mellon University’s Human-Computer Interaction Institute, is FingerSense.

The technology uses acoustic sensing and real-time classification to let touchscreens know not only where a user is touching, but also how they are touching—for example, with the fingertip, knuckle, or nail. It is currently being developed for inclusion in upcoming smartphone models to bring traditional “right-click”-style functions into the mix, among many other features.
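In outline, this kind of real-time classification can be as simple as nearest-centroid matching over a few acoustic features. The sketch below is a loose illustration, assuming two invented features (spectral centroid and impact energy) and made-up class centroids; the actual FingerSense pipeline is more sophisticated:

```python
import math

# Illustrative nearest-centroid touch classifier. The features and the
# centroid values below are invented for this sketch, not FingerSense data.
TOUCH_CLASSES = {
    # class: (spectral_centroid_hz, impact_energy)
    "fingertip": (900.0, 0.3),
    "knuckle": (300.0, 0.8),
    "nail": (2500.0, 0.2),
}

def classify_touch(spectral_centroid_hz: float, impact_energy: float) -> str:
    """Return the touch class whose centroid is nearest to the input."""
    def distance(name):
        hz, energy = TOUCH_CLASSES[name]
        # Rescale energy so both features contribute on a comparable scale.
        return math.hypot(spectral_centroid_hz - hz,
                          (impact_energy - energy) * 1000.0)
    return min(TOUCH_CLASSES, key=distance)

print(classify_touch(950.0, 0.25))  # fingertip
```

Intuitively, the soft pad of a fingertip damps the impact, a knuckle lands with a heavier, lower-frequency thud, and a nail produces a sharp, higher-frequency click; features along these lines are what make the classes separable.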

[Images: FingerSense distinguishing touches made with the finger, knuckle, and nail]

Another illustrative project is TouchTools, which draws on users’ familiarity and motor skills with physical tools from the real world. Users replicate a tool’s corresponding real-world grasp and press it to the screen as though the tool were physically present. The system recognizes the pose and instantiates the virtual tool—for example, a dry-erase marker or a camera—as if it were being grasped at that position. Users can then translate, rotate, and otherwise manipulate the tool as they would its physical counterpart: a marker can be moved to draw, and a camera’s shutter button can be pressed to take a photograph.
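In outline, grasp recognition of this kind maps the geometry of the contact points to a tool. The sketch below is purely illustrative, using only contact count and spread as stand-ins for the richer contact-shape matching a system like TouchTools performs; every threshold here is invented:

```python
# Illustrative grasp-to-tool recognizer. Contact counts, spreads, and
# thresholds are invented; a real system matches full contact geometry.

def recognize_tool(contacts):
    """Guess a virtual tool from a list of (x, y) touch contacts."""
    n = len(contacts)
    xs = [x for x, _ in contacts]
    ys = [y for _, y in contacts]
    spread = max(max(xs) - min(xs), max(ys) - min(ys)) if n > 1 else 0.0
    if n == 2 and spread < 40:
        return "marker"  # two close contacts, like a pinched pen grip
    if n == 5 and spread > 120:
        return "camera"  # five wide contacts, like framing a shot
    return "none"

print(recognize_tool([(10, 10), (30, 20)]))  # marker
```

The design point is that the tool is implied by the grasp itself, so no toolbar or explicit mode switch is needed.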

As with our hands in the real world, both FingerSense and TouchTools provide fast and fluid mode switching, something that remains cumbersome in today’s interactive environments.

Contemporary applications often expose a button or toolbar that allows users to toggle between modes (e.g., pointer, pen, eraser modes) or require use of a special physical tool, such as a stylus. FingerSense and TouchTools can utilize the natural modality of our hands, rendering these accessories superfluous.

Learning from Human-Computer Interaction

For the past half-century, we’ve assumed that manifesting tools in computing environments means providing users with a toolbar (like those in illustration programs) and, in general, having them click buttons to switch modes. This approach is simplistic, does not scale well to small devices, and requires constant mode switching. On touchscreens, for example, the inability to disambiguate between scrolling and selection has made something as commonplace as copy and paste a truly awkward dance of the fingers.

Instead, computers should utilize the natural modality and power of our fingers and hands to provide powerful, intuitive mode switching. If we are successful, the era of poking our fingers at screens will come to feel rather archaic. In its place, we will have interactive devices that leverage the full capabilities of our hands, matching the richness of our manipulations in the real world.

Combined with the fact that the digital world lets us escape many mundane physical limitations (e.g., real objects cannot disappear or rewind in time), it seems likely we can craft interactive experiences that, for the first time, exceed the ability of our hands in the real world. Today, for example, I cannot sculpt virtual clay nearly as well as I can sculpt real clay with my bare hands. But if we can match that capability through superior input technologies, it is inevitable that we will eventually exceed it—and that is the true promise of computing.

To realize the full potential of computing on the go, we must continue to innovate powerful, natural interactions between humans and mobile computers. This entails creating both novel sensing technologies and novel interaction techniques. Put simply: we either need to make better use of our fingers in the same small space, or give them more space to work within.

Image of elegant hands courtesy Shutterstock.

