I’m reading a book called “Mobile Usability” by Jakob Nielsen and Raluca Budiu. Chapter 4 focuses on writing for mobile. Now, mobile reading is something I struggle with daily.
The small screen. Long sentences. Even longer paragraphs. Reading while waiting, multitasking, or only half paying attention. Much of today’s mobile content seems to assume ideal conditions for readers: a calm environment, full focus, and plenty of patience. In reality, mobile content lives in a very different cognitive environment from desktop content. Mobile content is more fragmented, more interruptible, and less forgiving of complexity.
Why reading on mobile is uniquely challenging
When people read on mobile, they rarely settle into a long, uninterrupted session. They skim. They jump around. They look for keywords. They abandon content quickly when it no longer makes sense. Because so little text is visible on screen at once, readers have to work harder to remember what came before. Add interruptions such as notifications, movement, or background noise, and comprehension becomes even more fragile.
This is why mobile users often miss important information even when it’s technically “there.” The problem isn’t laziness or lack of intelligence; it’s cognitive load. If understanding requires too much effort, people simply move on.
That leads to an important question for mobile usability: how do we know whether users actually understand what they’re reading?
A cloze test
One method comes from educational and reading research: a cloze test. The cloze test works by taking a piece of text and removing certain words, often every sixth or seventh word. Readers are then asked to fill in the missing words using context alone. If they understand the text, they can usually reconstruct it with reasonable accuracy. If they don’t, their answers quickly break down.
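The mechanics are simple enough to sketch in a few lines. Here is a minimal, hypothetical implementation of the procedure described above: blank out every n-th word, then score responses by exact match. (Real cloze studies often accept synonyms or use more careful tokenization; this sketch only illustrates the idea.)

```python
def make_cloze(text, n=6):
    """Blank out every n-th word; return the gapped text and the answer key."""
    words = text.split()
    answers = {}
    for i in range(n - 1, len(words), n):
        answers[i] = words[i]
        words[i] = "_____"
    return " ".join(words), answers

def score_cloze(answers, responses):
    """Exact-match scoring: fraction of blanks filled with the original word."""
    correct = sum(
        1 for i, word in answers.items()
        if responses.get(i, "").strip().lower() == word.lower()
    )
    return correct / len(answers) if answers else 0.0
```

A score of 0.6 or higher on such a test is the conventional threshold for “easy to understand,” as discussed below.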
What makes the cloze test interesting is that it doesn’t test isolated skills such as vocabulary or grammar. It tests whether meaning is being successfully constructed in the reader’s mind. In other words, it measures comprehension, not just readability.
Why cloze tests are relevant to mobile usability
The research by R.I. Singh and colleagues from the University of Alberta applied the cloze test to the privacy policies of ten popular websites (eBay, Facebook, Google, Microsoft, Wikipedia, YouTube, etc.). The results showed an average comprehension score of 39.18% on desktop and 18.93% on mobile. For text to be considered easy to understand, cloze scores typically need to be 60% or higher.
Even on desktop screens, users achieved only about two-thirds of the desired comprehension level. On mobile, comprehension dropped dramatically. This highlights just how complex and unreadable privacy policies tend to be and how much worse the problem becomes on smaller screens.
In the context of mobile usability, cloze tests are especially powerful because they expose subtle comprehension problems. A user may be able to scroll, tap, and complete a flow while still misunderstanding what they read. Analytics won’t reveal this. Even usability testing might miss it if participants hesitate to admit confusion.
The cloze test makes misunderstandings visible. When users consistently fail to restore missing words, it often points to deeper issues: overly complex sentence structures, abstract language, missing context, important information being introduced too late, etc. These are all common problems in mobile interfaces, particularly in onboarding flows, settings screens, and explanatory content.
Are cloze tests actually used in practice?
Despite their usefulness, cloze tests are not commonly used by mobile app teams, at least in my own experience (and according to ChatGPT). Most developers and designers have never heard of them. In industry, content is more often evaluated using readability formulas, style guides, or A/B tests that focus on behavior rather than understanding.
Cloze tests tend to live in academic papers, educational research, and occasional UX studies, far from day-to-day product development. Part of the reason is that they sound academic. Another reason is that teams often assume comprehension problems will show up indirectly through metrics.
But when content is critical — think healthcare, finance, permissions, or legal explanations — those assumptions can be costly. The cloze test offers a low-tech, evidence-based way to check whether people actually understand what they read under mobile conditions.
How simple should mobile content be?
Most usability research converges on a clear recommendation: mobile content should be written at a lower reading level than desktop content.
A common target is somewhere between a 6th and 8th grade reading level, and even lower when the information is time-sensitive or high-stakes. This doesn’t mean oversimplifying ideas. It means expressing them with shorter sentences, clearer structure, and more concrete language.
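Grade-level targets like these usually come from readability formulas. As a rough illustration, here is a sketch of the Flesch-Kincaid grade level, one of the most widely used such formulas. The syllable counter is a crude vowel-group heuristic of my own, not part of the formula itself, so treat the output as an estimate.

```python
import re

def count_syllables(word):
    """Rough syllable estimate: count groups of consecutive vowels (heuristic)."""
    groups = re.findall(r"[aeiouy]+", word.lower())
    return max(1, len(groups))

def fk_grade(text):
    """Flesch-Kincaid grade level:
    0.39 * (words / sentences) + 11.8 * (syllables / words) - 15.59
    """
    sentences = max(1, len(re.findall(r"[.!?]+", text)))
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(w) for w in words)
    return 0.39 * (len(words) / sentences) + 11.8 * (syllables / len(words)) - 15.59
```

A formula like this only approximates difficulty from surface features (sentence and word length), which is exactly why cloze tests remain valuable: they measure what real readers actually understand.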
Cloze tests are particularly useful here because they go beyond theoretical reading levels. They show whether real readers, on real devices, can maintain meaning as they read.
Key takeaways
Mobile usability isn’t only about layout, gestures, or navigation. It’s also about whether people can understand what they read in imperfect, distracted moments.
Cloze tests aren’t a silver bullet, and they won’t replace usability testing or good writing practices. But they do give us a simple way to test understanding directly. Even if they are rarely used in practice, they point us toward a more honest question we should be asking:
Does this content still make sense when life gets in the way?
References:
- “Mobile Usability.” Jakob Nielsen and Raluca Budiu. The Nielsen Norman Group, 2013.
- “Evaluating the Readability of Privacy Policies in Mobile Environments.” R. I. Singh, M. Sumeeth, and J. Miller. International Journal of Mobile Human Computer Interaction, vol. 3, no. 1 (January–March 2011), pp. 55-78.
- “Reading Content on Mobile Devices.” Kate Moran. The Nielsen Norman Group, December 11, 2016. Available: https://www.nngroup.com/articles/mobile-content/
The article originally appeared on Substack.
