
Leveraging Eye Tracking to Create an Engaging User Experience

by Frank Guo
6 min read

Combining eye-tracking data with conventional usability studies can give you unparalleled insight into how to better guide user attention and refine their experience.

Eye tracking is a powerful research technique that can help you design engaging experiences by revealing how to capture user attention. It’s also a misunderstood technique. Many are lured by the “sexiness” of heatmaps and gaze plots but fail to act on the findings properly, and so end up with lots of data and little actionable insight. Others avoid eye tracking altogether because of the perceived complexity of the technique.

Having set up eBay’s eye tracking lab and conducted extensive eye-tracking research to improve advertising, homepages, content strategies, and visual design, I’ve developed some guidelines and trained many UX professionals in applying the technique. Based on that experience, here’s a quick how-to guide for eye tracking.

What Is Eye Tracking?

Simply put, eye tracking is a technique for understanding where users’ visual attention goes. The results can tell us how to write content that attracts attention, how to make information more discoverable, and how to engage and excite users. Below are the metrics typically collected in eye-tracking research:

  • Gaze plots (the journey users’ eyes make to reach a target)
  • Time needed to find a target
  • Fixation points (what screen regions capture user attention and for how long)
  • Heatmaps (the aggregation of how much attention a group of users pay to different regions of the user interface)
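To make the fixation metric concrete, here is a minimal sketch, not any vendor’s actual algorithm, of how raw gaze samples are commonly turned into fixations using the dispersion-threshold idea: a fixation is a cluster of gaze points that stays within a small screen area for a minimum duration. The sample format, thresholds, and function name are illustrative assumptions.

```python
# Illustrative dispersion-based (I-DT style) fixation detection.
# Input: gaze samples as (timestamp_ms, x, y) tuples from an eye tracker.
# A fixation is a run of samples whose spatial dispersion stays below a
# threshold (in pixels) for at least a minimum duration (in milliseconds).

def detect_fixations(samples, max_dispersion=35, min_duration_ms=100):
    """Return fixations as (start_ms, duration_ms, center_x, center_y)."""
    fixations = []
    i = 0
    while i < len(samples):
        j = i
        # Grow the window while its dispersion stays under the threshold.
        while j < len(samples):
            window = samples[i:j + 1]
            xs = [s[1] for s in window]
            ys = [s[2] for s in window]
            dispersion = (max(xs) - min(xs)) + (max(ys) - min(ys))
            if dispersion > max_dispersion:
                break
            j += 1
        window = samples[i:j]
        if window and window[-1][0] - window[0][0] >= min_duration_ms:
            xs = [s[1] for s in window]
            ys = [s[2] for s in window]
            fixations.append((window[0][0],
                              window[-1][0] - window[0][0],
                              sum(xs) / len(xs),
                              sum(ys) / len(ys)))
            i = j  # continue after the fixation
        else:
            i += 1  # slide past a saccade sample
    return fixations
```

Heatmaps and gaze plots are built on top of fixations like these: heatmaps aggregate fixation durations across users, while gaze plots connect one user’s fixations in time order.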

The screens below show some examples of what eye-tracking data looks like:

Example 1: heatmap on an eBay page

Example 2: eye-tracking visualization

Example 3: eye-tracking visualization

Why Eye Tracking?

We do usability testing all the time, so many of you might ask: What can eye tracking offer that can’t be achieved through conventional usability testing? Well, for starters, usability studies can’t tell you how users spontaneously pay attention. Why?

Reason 1. You can’t directly observe what people are looking at

In a traditional usability study, you rely on two key techniques: observing user behavior and asking questions. But when behavior is purely visual—a user looking at the screen without typing or moving the mouse—observation alone can’t tell you what they are paying attention to.

Reason 2. People cannot tell you what they look at

What about simply asking users what attracts their attention? The problem is that people are very bad at remembering what they look at. In my eye-tracking studies, what people tell me they looked at often differs from what they actually looked at. This is because visual attention is a spontaneous, nearly subconscious mental process.

In the example below, a user looked at the ads at the bottom of the page and later had no recollection of ever seeing them. This shows that much of visual attention is unknown to the user.

Gaze plot showing fixations on ads the user later did not recall seeing

Reason 3. Thinking aloud disrupts visual attention

Conventional usability studies depend heavily on the so-called “think-aloud protocol”: asking users to tell us whatever passes through their minds as they perform tasks. However, thinking aloud is not natural—you don’t talk aloud while reading a webpage in real life, do you? Asking users to think aloud inevitably interferes with how they naturally look at things.

Given these limitations of conventional usability studies, eye tracking emerges as the only way to fully understand users’ visual attention.

Areas of Application

Below are areas specifically suited for eye-tracking research. All of them are closely related to attracting and guiding user attention:

  • Homepages
  • Landing pages
  • Search
  • Advertising
  • Marketing promos
  • Reading content
  • Navigation
  • Shopping (looking for items to buy and checkout)

In other words, all areas of user experience where visual attention is heavily involved—where you can’t know what’s going on by just observing screen activities—can benefit from eye tracking research.

Eye Tracking + Conventional Usability Study = Complete Insight

Below is the approach I have used in most of my eye-tracking research. Instead of running a stand-alone eye-tracking session, I typically include an eye-tracking component as the first part of the study and a conventional usability study as the second. To collect clean eye-tracking data, free of the influence of moderation and learning effects, the eye-tracking portion should always come first.

Part 1: Eye tracking and uninterrupted tasks

With the eye tracker running, we ask users to perform tasks with no moderation: no think-aloud, no questions from the moderator, and no assistance if they get stuck.

Purpose: We want to get unbiased, clean visual attention data by letting users perform tasks in the most natural way possible. This gives us a clear picture of what drives user attention.

Key to success: Make sure you set up sensible, realistic tasks, using a live site/mobile app if possible, asking users to do activities they would naturally do, and avoiding interrupting them in any way when they do the activities.

Part 2: Conventional usability testing, repeating the tasks above

Ask users to perform the same tasks as in Part 1, but with different stimuli (e.g., buying a different item, reading a different article, or creating a different invoice). This time, without eye tracking, run a typical usability session: users think aloud as they perform the tasks, and we can help when they encounter problems, probe when we observe interesting behavior, and ask them to elaborate on what made them say “wow” or “sucks.”

Purpose: By having users repeat similar tasks, this time with the added insight of their comments, we can understand why users look at the screen in the ways eye tracking captured.

Key to success: Given that this part is designed to get in-depth insight into why users behave in certain ways, make sure to ask probing questions based on your research questions and observed user hesitation and confusion.

How to Interpret Data?

There are various ways to interpret eye-tracking data. Conventionally, it is analyzed quantitatively by aggregating data across a large sample of users for each task. The familiar heatmap from many eye-tracking studies is an example of this kind of output. Quantitative data can tell us how much attention various screen regions attract, but it does not reveal the process by which users discover information on the page.
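As a concrete illustration of the quantitative side, the core of a heatmap is simply an accumulation of fixation durations into screen regions, summed across users. The sketch below uses illustrative names and an assumed 50-pixel grid cell; real tools add Gaussian smoothing and color mapping on top of this accumulation step.

```python
# Illustrative heatmap aggregation: bin fixation durations from many
# users into a coarse screen grid, summing across the whole sample.
from collections import defaultdict

def aggregate_heatmap(fixations_by_user, cell=50):
    """fixations_by_user: {user_id: [(x, y, duration_ms), ...]}.
    Returns {(grid_x, grid_y): total_ms} summed across all users."""
    grid = defaultdict(float)
    for fixations in fixations_by_user.values():
        for x, y, duration_ms in fixations:
            grid[(int(x // cell), int(y // cell))] += duration_ms
    return dict(grid)
```

A rendering layer would then normalize these totals and map them to colors, which is what produces the familiar red-to-green overlay.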

Eye tracking emerges as the only way to fully understand users’ visual attention

On the other hand, eye-tracking data can be analyzed qualitatively, one user at a time, on a task-by-task basis, much like how we analyze data for ethnographic research and one-on-one interviews. The qualitative data, when cross-referenced with the think-aloud protocol and observation of behavior from Part 2 of the procedure outlined above, can provide rich insight into users’ visual attention processes.

Case Study

In one study I conducted, a user’s eye movements showed her gaze traveling long distances, jumping from the top of the screen to the bottom and ignoring all of the page content in between. What caused this behavior?

To understand the reason, it was not enough to look at the eye tracking data alone. From her verbal comments collected in Part 2 of the testing, I learned that she explored this web page by trying to gauge how much content there was on the page before looking at any particular screen element, which was why she looked at the top of the screen and then immediately jumped to the bottom. In so doing she got a sense of what was there on the screen.

That feedback, combined with the eye-tracking data, gave me a clear understanding of what she was trying to do on first exposure to the page. The same visual search pattern emerged from other users in the study. By synthesizing eye-tracking and think-aloud data across just 10 users, we gained a deep understanding of how to better guide user attention, which drove a successful redesign of the UI.

Conclusion

Eye tracking is a powerful yet often misunderstood UX research technique. When applied correctly, through careful planning and identifying the right areas of application, it generates deep insight into how users pay attention to the UI. The best way to apply eye tracking is to use it as an add-on to conventional usability studies and combine qualitative and quantitative analysis to interpret eye tracking data. Through this holistic approach, you’ll get actionable insight into how to engage users through effective UX design.

 

Image of futuristic eye courtesy Shutterstock.


Frank Guo

Frank Guo provides UX strategy and design consulting through his firm, UX Strategized LLC, helping leading companies like eBay, PayPal, Yahoo!, StubHub, and IMVU improve user experience, usability, product strategy, and digital marketing. He has published 15+ professional papers and co-authored a book chapter related to user experience.
