Why Reading on Mobile Is Uniquely Challenging

by Paivi Salminen
4 min read

You’re reading on your phone right now, probably skimming, maybe distracted. That’s exactly why mobile content is so hard to get right: even when the information is technically there, people miss it. Researchers found that comprehension on mobile can drop to just 19%, compared with 39% on desktop, using a simple test that removes words to see whether readers actually understand what they’re reading. The real question isn’t whether your content looks good on a small screen. It’s whether it still makes sense when life gets in the way.

I’m reading a book called Mobile Usability by Jakob Nielsen and Raluca Budiu. Chapter 4 focuses on writing for mobile. Now, mobile reading is something I struggle with daily.

The small screen. Long sentences. Even longer paragraphs. Reading while waiting, multitasking, or only half paying attention. Much of today’s mobile content seems to assume ideal conditions for readers: a calm environment, full focus, and plenty of patience. In reality, mobile content lives in a very different cognitive environment from desktop content. Mobile content is more fragmented, more interruptible, and less forgiving of complexity.

Why reading on mobile is uniquely challenging

When people read on mobile, they rarely settle into a long, uninterrupted session. They skim. They jump around. They look for keywords. They abandon content quickly when it no longer makes sense. Because so little text is visible on screen at once, readers have to work harder to remember what came before. Add interruptions such as notifications, movement, or background noise, and comprehension becomes even more fragile.

This is why mobile users often miss important information even when it’s technically “there.” The problem isn’t laziness or lack of intelligence; it’s cognitive load. If understanding requires too much effort, people simply move on.

That leads to an important question for mobile usability: how do we know whether users actually understand what they’re reading?

A cloze test

One method comes from educational and reading research: a cloze test. The cloze test works by taking a piece of text and removing certain words, often every sixth or seventh word. Readers are then asked to fill in the missing words using context alone. If they understand the text, they can usually reconstruct it with reasonable accuracy. If they don’t, their answers quickly break down.
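To make the mechanics concrete, here is a minimal sketch of generating a cloze test in Python. The every-sixth-word interval follows the convention mentioned above, but the `make_cloze` helper, the blank marker, and the sample passage are illustrative assumptions, not a standard implementation:

```python
import re

def make_cloze(text, every=6, blank="____"):
    """Blank out every Nth word to build a cloze test.

    Returns the gapped text plus a dict mapping each blanked
    position to the word that was removed (the answer key).
    """
    words = text.split()
    removed = {}
    for i in range(every - 1, len(words), every):
        # Strip trailing punctuation so "around." is scored as "around".
        core = re.sub(r"\W+$", "", words[i])
        removed[i] = core
        words[i] = blank + words[i][len(core):]
    return " ".join(words), removed

passage = ("When people read on mobile, they rarely settle into a long, "
           "uninterrupted session. They skim. They jump around.")
cloze_text, answers = make_cloze(passage)
# cloze_text now contains three blanks; answers holds the removed words.
```

Participants would then fill in the blanks from context alone, and their answers would be compared against the answer key.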

What makes the cloze test interesting is that it doesn’t test isolated skills such as vocabulary or grammar. It tests whether meaning is being successfully constructed in the reader’s mind. In other words, it measures comprehension, not just readability.

Why cloze tests are relevant to mobile usability

The research by R.I. Singh and colleagues from the University of Alberta applied the cloze test to the privacy policies of ten popular websites (eBay, Facebook, Google, Microsoft, Wikipedia, YouTube, etc.). The results showed an average comprehension score of 39.18% on desktop and 18.93% on mobile. For text to be considered easy to understand, cloze scores typically need to be 60% or higher.
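Scoring a cloze test is simple arithmetic: the percentage of blanks restored correctly. A minimal sketch, assuming exact-match scoring (the strictest convention; studies sometimes also accept synonyms) and hypothetical data structures not taken from the Singh et al. study:

```python
def cloze_score(answers, responses):
    """Percent of blanks restored correctly (exact, case-insensitive match).

    `answers` maps blank position -> original word; `responses` maps
    blank position -> what the participant typed. Both shapes are
    assumptions chosen for this sketch.
    """
    if not answers:
        return 0.0
    correct = sum(
        1 for pos, word in answers.items()
        if responses.get(pos, "").strip().lower() == word.lower()
    )
    return 100.0 * correct / len(answers)

# A hypothetical participant restores two of three blanks:
answers = {5: "they", 11: "uninterrupted", 17: "around"}
responses = {5: "They", 11: "unbroken", 17: "around"}
score = cloze_score(answers, responses)  # about 66.7
```

Against the 60% threshold cited above, this hypothetical passage would just pass; the privacy policies in the study averaged 18.93% on mobile, far below it.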

Even on desktop screens, users achieved only about two-thirds of the desired comprehension level. On mobile, comprehension dropped dramatically. This highlights just how complex and unreadable privacy policies tend to be and how much worse the problem becomes on smaller screens.

In the context of mobile usability, cloze tests are especially powerful because they expose subtle comprehension problems. A user may be able to scroll, tap, and complete a flow while still misunderstanding what they read. Analytics won’t reveal this. Even usability testing might miss it if participants hesitate to admit confusion.

The cloze test makes misunderstandings visible. When users consistently fail to restore missing words, it often points to deeper issues: overly complex sentence structures, abstract language, missing context, important information being introduced too late, etc. These are all common problems in mobile interfaces, particularly in onboarding flows, settings screens, and explanatory content.

Are cloze tests actually used in practice?

Despite their usefulness, cloze tests are not commonly used by mobile app teams; that matches my own experience, and ChatGPT agrees. Most developers and designers have never heard of them. In industry, content is more often evaluated using readability formulas, style guides, or A/B tests that focus on behavior rather than understanding.

Cloze tests tend to live in academic papers, educational research, and the occasional UX study, far from day-to-day product development. Part of the reason is that they sound academic. Another is that teams often assume comprehension problems will show up indirectly through metrics.

But when content is critical — think healthcare, finance, permissions, or legal explanations — those assumptions can be costly. The cloze test offers a low-tech, evidence-based way to check whether people actually understand what they read under mobile conditions.

How simple should mobile content be?

Most usability research converges on a clear recommendation: mobile content should be written at a lower reading level than desktop content.

A common target is somewhere between a 6th and 8th grade reading level, and even lower when the information is time-sensitive or high-stakes. This doesn’t mean oversimplifying ideas. It means expressing them with shorter sentences, clearer structure, and more concrete language.
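Readability formulas give a quick, if crude, way to check prose against that grade-level target. Below is a minimal sketch of the standard Flesch-Kincaid grade-level formula, using a rough vowel-group syllable counter; real readability tools use dictionaries or better heuristics, so treat the numbers as approximate:

```python
import re

def fk_grade(text):
    """Flesch-Kincaid grade level:
    0.39 * (words / sentences) + 11.8 * (syllables / words) - 15.59
    """
    sentences = max(1, len(re.findall(r"[.!?]+", text)))
    words = re.findall(r"[A-Za-z']+", text)
    if not words:
        return 0.0
    # Crude syllable estimate: count vowel groups, minimum one per word.
    syllables = sum(
        max(1, len(re.findall(r"[aeiouy]+", w.lower()))) for w in words
    )
    return 0.39 * len(words) / sentences + 11.8 * syllables / len(words) - 15.59

fk_grade("Tap Save. Your data is safe.")  # short words: well below grade 8
fk_grade("Comprehension deteriorates significantly whenever "
         "interruptions accumulate.")     # dense wording: far above it
```

A score in the 6–8 range matches the target above; a single sentence packed with long words can blow past it. But as the next paragraph notes, a formula only estimates difficulty; it cannot tell you whether anyone understood.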

Cloze tests are particularly useful here because they go beyond theoretical reading levels. They show whether real readers, on real devices, can maintain meaning as they read.

Key takeaways

Mobile usability isn’t only about layout, gestures, or navigation. It’s also about whether people can understand what they read in imperfect, distracted moments.

Cloze tests aren’t a silver bullet, and they won’t replace usability testing or good writing practices. But they do give us a simple way to test understanding directly. Even if they are rarely used in practice, they point us toward a more honest question we should be asking:

Does this content still make sense when life gets in the way?


References:

  1. Mobile Usability. Jakob Nielsen and Raluca Budiu. The Nielsen Norman Group, 2013.
  2. Evaluating the Readability of Privacy Policies in Mobile Environments. R. I. Singh, M. Sumeeth, and J. Miller. International Journal of Mobile Human Computer Interaction, vol. 3, no. 1 (January–March 2011), pp. 55-78.
  3. Reading Content on Mobile Devices. Kate Moran. The Nielsen Norman Group, December 11, 2016. Available: https://www.nngroup.com/articles/mobile-content/

The article originally appeared on Substack.

Featured image courtesy: Vitaly Gariev.

Päivi Salminen
Päivi Salminen, MSc, is a digital health innovator turned researcher with over a decade of experience driving growth and innovation across start-ups and international R&D projects. After years in the industry, she has recently transitioned into academia to explore how user experience and design thinking can create more equitable and impactful healthcare solutions. Her work bridges business strategy, technology, and empathy, aiming to turn patient and clinician insights into sustainable innovations that truly make a difference.

