
When Does Quantity Become Quality? How to navigate big data

by Michael Lai

Thoughtful inspection of qualitative and quantitative data allows UX professionals to glean genuine insight on user behavior from reams of big data.

“What I discovered yesterday was that we are now seeing, for the first time, what happens when quantity becomes quality.”—Garry Kasparov, after winning against IBM’s Deep Blue in game two to draw level in the six-game series

Whatever your view of the definition or competencies of a UX designer, there is no question that we work at the interface between humans and computers. It should therefore come as no surprise that a solid understanding of both the relevant technology and user behavior is required to deliver the best possible user experience.

Among the current crop of UX designers, strategists, and architects entering the field from visual design or software development backgrounds, there has been a steady increase in demand for experience in research-related disciplines. Research skills fill a gap in the UX team by going beyond visual and interaction design to understand the motivations behind user behavior, using a combination of qualitative and quantitative research techniques.

Techniques established in statistics and bioinformatics already account for a significant proportion of current practice in user behavior research performed as part of UX activities such as A/B testing and heat mapping. With the growing number of usability tests being conducted, the depth and breadth of the user behavior knowledge base are expanding at a rapid rate.

While “Big Data” is causing excitement in the IT and business sectors, the practice and concepts of data analysis are not unfamiliar to those coming from a research background. The intersection of business intelligence and user-centric design therefore creates both demand and opportunity for UX professionals with specialist knowledge in user research to make more impact with their work.

Qualitative and quantitative methods let you treat a problem by linking symptoms to the root cause

Aside from the hype generated by marketing, the real challenge is dealing with data that is being collected in greater amounts at a faster rate from more sources: the so-called “Three Vs (Volume, Velocity, Variety) of Big Data.” To help you navigate the oceans of information, here are some general pointers about metrics and data analysis to set you on the right course.

Qualitative and Quantitative

Generally speaking, most usability metrics deal with either qualitative or quantitative data. Qualitative researchers ask broad questions of their subjects with the intention of uncovering patterns and trends. This type of research is typified by the classic Likert scale, commonly seen in online customer surveys, where the questions and answers don’t easily lend themselves to comparisons or sophisticated analysis.

More complex examples of qualitative data include: video footage of user interactions with the system, tester observations of user navigation pathways, notes taken about problems experienced, comments/recommendations, and answers to open-ended questions.

By contrast, quantitative researchers ask specific, narrowly focused questions and collect numerical data from participants in order to answer them.

Quantitative data is usually associated with information collected through anonymous user statistics, eye tracking, click-mapping, and other methods that yield specific measurements which can then be subjected to rigorous statistical, mathematical, or computational analysis techniques.

Some examples of quantitative data include success rates, task times, error rates, and satisfaction-questionnaire ratings. However, the distinction isn’t always so clear: once you start collating qualitative research results, it is possible to quantify qualitative data so that it can be interrogated further.
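To make this concrete, here is a minimal sketch, in plain Python, of one way collated qualitative feedback can be quantified. The comments and the keyword-based coding scheme are invented for illustration and are not taken from any study mentioned in this article.

from collections import Counter

# Hypothetical open-ended survey comments (invented for illustration).
comments = [
    "The checkout page is confusing and slow to load",
    "I couldn't find the search box",
    "Search results load slowly on my phone",
    "Great layout, but the error messages are confusing",
]

# A simple, hand-built coding scheme: theme -> keywords that signal it.
coding_scheme = {
    "performance": ["slow", "load"],
    "findability": ["find", "search"],
    "clarity": ["confusing", "error message"],
}

def code_comment(text):
    """Return the set of themes whose keywords appear in a comment."""
    lowered = text.lower()
    return {theme for theme, keywords in coding_scheme.items()
            if any(keyword in lowered for keyword in keywords)}

# Tally how many comments touch on each theme.
theme_counts = Counter(theme for c in comments for theme in code_comment(c))

for theme, count in theme_counts.most_common():
    print(f"{theme}: {count} of {len(comments)} comments")

Once qualitative notes have been coded and counted in this way, the resulting frequencies can be compared across releases or user groups much like any other metric.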

With bigger, better, and faster tools on the horizon, we won’t need to make the distinction between the two separate areas, as we can simply choose the appropriate tools for the questions we want to ask. However, when the resources allocated for user or usability research are scarce or inadequate, it’s not uncommon to see only qualitative or only quantitative research conducted, because the time, cost, and personnel may not always be available. Even if your usability research is outsourced to specialist agencies, care must be taken to remain critical and scrutinize the results generated.

 

Qualitative or Quantitative?

There are some very common pitfalls when you deal exclusively with results derived from qualitative or quantitative studies alone, and hopefully you will come to the conclusion that combining both in your usability research and testing is the best way to go.

It’s a logical assumption that for direct feedback about usability you should always go straight to the user and record attitudes, feelings, and behaviors. However, there’s enough literature and research out there to suggest that what users say doesn’t always reflect what they actually do or think. Qualitative research is much more likely to reveal what users say they think and do, but it doesn’t always show how or why they actually do it.

Qualitative analysis allows us to test and validate assumptions about user behavior in a clear and unambiguous manner, but it is limited in the complexity of the questions that can be asked and in the ability to dissect or drill down into more specific details. Nevertheless, it remains a very valuable tool for framing the big-picture questions required at the beginning of a usability study, and it should always be the starting point for UX practitioners. It also provides a focus for quantitative research and the context for the results that quantitative research produces.

For many organizations, quantitative research is a double-edged sword, and there are inherent risks in relying on the figures alone. The volume and detail of data collected can provide very powerful insight for decision makers, but there are also drawbacks in terms of the effort required to obtain and process the information, not to mention the difficulty of distilling useful insights from the background noise.

It’s also easy to jump to the wrong conclusions based on a weight of evidence that is open to interpretation without fully understanding the context of the user interaction. Just consider how difficult it is to come up with the correct interpretation for a series of actions when any number of motivations and reasons can be hidden behind a single mouse click.

Since there are many reasons why actual user behavior may not be reflected in qualitative research alone, it is important to use quantitative results to verify qualitative findings. For example, if users feel that the system is slower but benchmark tests gathered from usage statistics show that it is in fact faster, then it’s important to look at other usability issues that might be causing that perception.
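As a sketch of that kind of cross-check, the snippet below compares measured task times with perceived-speed ratings collected in the same sessions. All of the numbers are invented, and the variable names are assumptions made for illustration only.

import statistics

# Hypothetical task-time samples (in seconds) for the same task on the
# previous release and the current one. All values are invented.
old_release_times = [41.2, 38.5, 44.0, 39.8, 42.3, 40.1]
new_release_times = [33.4, 35.1, 31.9, 36.0, 34.2, 32.7]

# Perceived-speed ratings from the same sessions (1 = very slow, 5 = very fast).
old_release_ratings = [3, 4, 3, 4, 3, 4]
new_release_ratings = [2, 3, 2, 3, 2, 3]

def summarize(label, times, ratings):
    print(f"{label}: median task time {statistics.median(times):.1f}s, "
          f"mean perceived speed {statistics.mean(ratings):.1f}/5")

summarize("Old release", old_release_times, old_release_ratings)
summarize("New release", new_release_times, new_release_ratings)

# If measured times drop but perceived-speed ratings also drop, the mismatch
# itself is the finding: look for other usability issues, such as missing
# progress feedback, rather than trusting either number on its own.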

Similarly, if users feel that they are completing tasks more successfully but more mistakes are being logged, it’s crucial to look for interface or workflow issues. When dealing with the complex behavior of users, it is impossible to take into account every factor behind every action and decision. This is a good reason to combine multiple streams of quantitative and qualitative data in your studies so you can stay on the right track.
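A minimal sketch of that kind of triangulation might look like the following, where each hypothetical session record for a single task combines the measured outcome, task time, and error count with a coded qualitative note. The data and field names are invented for illustration.

import statistics

# Hypothetical per-session observations for one task, combining streams:
# measured outcome, task time, logged errors, and a coded qualitative note.
sessions = [
    {"success": True,  "time_s": 62, "errors": 1, "note_theme": "labels unclear"},
    {"success": False, "time_s": 95, "errors": 3, "note_theme": "labels unclear"},
    {"success": True,  "time_s": 58, "errors": 0, "note_theme": None},
    {"success": True,  "time_s": 71, "errors": 2, "note_theme": "too many steps"},
]

success_rate = sum(s["success"] for s in sessions) / len(sessions)
median_time = statistics.median(s["time_s"] for s in sessions)
total_errors = sum(s["errors"] for s in sessions)
themes = sorted({s["note_theme"] for s in sessions if s["note_theme"]})

print(f"Success rate: {success_rate:.0%}")
print(f"Median task time: {median_time:.0f}s")
print(f"Errors logged: {total_errors}")
print(f"Observed themes: {', '.join(themes)}")

Reading the quantitative summary alongside the coded themes keeps the what (three of four sessions succeeded, but with errors) tied to the why (unclear labels, too many steps).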

Conclusion

With all the data we’re collecting, it’s important to step back and think about what all of these values represent. Research is as much about asking the right questions as it is about coming to the correct conclusions. Some warn against defining and collecting metrics that may not be of any value, and suggest that we should instead put more effort into things that we can understand and act on.

Data should not be used to find evidence that supports our own opinions and assumptions. Data gives us an opportunity to reach more users and understand them better. This comes with greater responsibility for UX professionals, who need to rigorously test and validate the “insights” they glean before rushing to implement them, as Lou Rosenfeld has pointed out.

One of the key take-home messages for UX practitioners from Comparative Usability Evaluation 8, in which Rolf Molich’s DialogDesign looked at usability parameters for the Budget car rental website, is to “Combine qualitative and quantitative findings in your report. Present what happened and support it with why it happened.” By applying qualitative and quantitative research methods, you can treat a problem by linking the symptoms (what happened) to the root cause (why it happened).

With this in mind, I hope you’ll consider including both qualitative and quantitative data as part of your next UX research project.

 

Image of chess pieces courtesy Shutterstock

Michael Lai
Michael Lai is a freelance and consulting UX architect specializing in infographic and data visualization design. He has worked and consulted in a number of different industries (hospitality, research, IT, science, and engineering) and has covered many UX-related roles, including user research, copywriting, training, graphic design, business analysis, and information architecture. In his spare time he works on iPhone app and hardware integration, startups, and non-profit organizations. Portfolio link: https://.dropbox.com/sh/5ojklvmarnhh5vz/xZBo1Pr96A
