
Pareto Principle-Based User Research

by Jennifer Aldrich
6 min read

By applying the Pareto Principle to user research, you can identify the top percentage of your product’s usability issues and feature gaps, then jump in and fix them.

A few years ago I stumbled across an amazing blog post on MeasuringUsability.com by Jeff Sauro, and had an epiphany. He had outlined the way he was conducting Pareto Principle-based user research, and I realized that I could modify what he was doing to obtain data that my organization had really been struggling to uncover up until that point.

What is the Pareto Principle?

In Richard Koch’s book The 80/20 Principle: The Secret of Achieving More with Less, he details how in 1897, a brilliant researcher named Vilfredo Pareto discovered that the majority of the wealth in England and other countries was predictably controlled by a small minority of the population. Pareto’s research is also known as the 80/20 rule and the law of the vital few, among other names.

In the early 1900s, Joseph M. Juran discovered Pareto’s research and realized that the concept also applied to tons of other situations in life: a tiny percent of criminals caused most of the crime, a small percentage of dangerous processes caused a majority of accidents, etc.

He also realized that the concept could be applied to improve consumer and industrial goods. He created a consulting service to work with companies to identify top areas they could improve upon to make the most impact with product enhancements.

Linking the Pareto Principle to User Research


The 80/20 rule is a powerful, fast, and cheap way to evaluate how you can pack the most UX punch

Imagine if you could make a tiny code change, and vastly improve your product UX. Oftentimes, you can.

In my case, the principle was startlingly accurate. Our research showed that 18% of our core product areas were causing 83% of our clients’ frustrations.

Would a statistician or professional researcher cringe and shed some tears if they saw the method and data I’m about to show you? Absolutely. We aren’t even calculating standard deviations here. But does the average stakeholder care about that? Nope.

This method is for those who don’t have a background in research or statistics, or for experienced professionals who just need some quick and dirty data. It’s a powerful, fast, and cheap way to quickly evaluate how you can pack the most UX punch when you’re planning improvements to your product or service.

Enough talking; let’s start doing. I’m going to outline all of the steps you will need to take to replicate this research for your organization.

The Research Process

Step 1: Recruit research subjects.

Do you have a list of your users? Email them and ask if they’d be interested in joining a “special community of customers that will have the opportunity to impact future changes to the product.” You don’t have to pay people; just being able to leave their fingerprint on the product is often more than enough motivation to get them involved in the research process.

In this case study, we didn’t pay our subjects a dime. They were all excited to be part of the community, and gave us candid, brutally honest feedback.

Step 2: Create your survey.

Making a survey is inexpensive, possibly even free, with Google Forms or a tool like SurveyMonkey.

You’re going to ask exactly two questions in your survey:

  1. If you could change one aspect of our product, what would you change? (Provide a list of all core content areas and allow only one selection. Do not include an other option.)
  2. How would you change it, and why would you make that change? (Make this question open-ended).

Step 3: Launch your survey.

To launch your survey, you can use a plain old email list, or you can get a free subscription to a service like MailChimp.

I prefer MailChimp, and here’s why:

  • Intuitive dashboard
  • Great for tracking open rates, click rates and other fun stats
  • Easily create lists and groups
  • Simple campaign templates
  • Unsubscribe/spam rules are handled for you

Step 4: Analyze your data.

After you launch your survey campaign, you’ll be flooded with responses and data! Don’t get overwhelmed; analyzing the data isn’t that intense.

You’ll start by calculating the total responses per core product area.

Part 1: Calculate Totals

  1. Export your survey data to a spreadsheet
  2. Sort it by core product area (the ones in your multiple choice question)
  3. Calculate totals for how many responses came from each core product area.
  4. Order them from most to least.

In this case study, my results looked like this (out of 40 functional areas):

Number of responses

  • Headlines – 26
  • Editor – 21
  • Files and Folders – 21
  • Forms and Surveys – 21
  • Groups – 17
  • Calendar – 12
  • Reports – 8
  • Other (remaining 33 core functional areas, aggregated) – 26
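If you’d rather not tally by hand, the counting in Part 1 takes only a few lines of Python. The area names and the "product_area" key below are invented sample data, not figures from the study:

```python
from collections import Counter

# Hypothetical survey rows: one dict per response, with the
# multiple-choice answer stored under "product_area" (made-up data).
responses = [
    {"product_area": "Headlines"},
    {"product_area": "Editor"},
    {"product_area": "Headlines"},
    {"product_area": "Calendar"},
    {"product_area": "Headlines"},
]

# Tally responses per core product area, then list them most to least.
totals = Counter(r["product_area"] for r in responses)
for area, count in totals.most_common():
    print(f"{area} - {count}")
```

In practice you’d build `responses` from your spreadsheet export (for example with Python’s `csv.DictReader`) rather than a hard-coded list.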

Next, you’ll convert each core product area’s response count into a percentage of the total responses.

Part 2: Calculate Product Area Percentages

Now look at the total number of respondents and do some quick math.

Let’s say I had 152 respondents total.

Take the total responses for each key area, and divide it by the overall number of respondents to get the percentage of respondents who identified each key area.

In this case study, my results looked like this:

  • Headlines – 26/152 = (17%)
  • Editor – 21/152 = (14%)
  • Files and Folders – 21/152 = (14%)
  • Forms and Surveys – 21/152 = (14%)
  • Groups – 17/152 = (11%)
  • Calendar – 12/152 = (8%)
  • Reports – 8/152 = (5%)
  • Remaining 33 areas total = 26/152 = (17%)

Using this information I was able to gather some very useful data:

83% of the 152 responses fell into 7 key functional areas; 17% fell into the remaining functional areas.

The 7 key areas identified make up 18% of the 40 key functional areas. (7/40 = 18% rounded)
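To double-check the arithmetic, the case-study math above can be reproduced in a short Python sketch. The counts are the ones reported in this article; multiplying by 100 before dividing keeps 7/40 at an exact 17.5 before rounding:

```python
# Counts for the 7 key areas, as reported in the case study.
key_areas = {
    "Headlines": 26, "Editor": 21, "Files and Folders": 21,
    "Forms and Surveys": 21, "Groups": 17, "Calendar": 12, "Reports": 8,
}
total_respondents = 152
total_areas = 40

# Multiply by 100 first so the division stays exact where possible.
response_share = round(100 * sum(key_areas.values()) / total_respondents)
area_share = round(100 * len(key_areas) / total_areas)

print(f"{response_share}% of responses fell into "
      f"{area_share}% of the core functional areas")
```

The printed figures match the article’s 83% / 18% split.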

Therefore, 18% of our key functional areas were causing 83% of our clients’ frustrations, so my results wound up being really close to 80/20! The areas the research identified were shocking; we were expecting completely different results. We acted on these results and ran the study again the following year. The areas we’d adjusted were knocked out of the list of issues, and we wound up with a new set of problem areas that once again aligned with the 80/20 rule. We adjusted those and replicated the study a third year in a row, and once again wound up with similar results. Our team (myself included) was pretty astounded by the consistency of the results year after year.

Step 5: Create a report.

Now the fun part: weaving the data into a simple, skimmable report for stakeholders.

In my instance, I started the report with an overview of the Pareto Principle in 2 sentences, then mentioned that this study broke down to 83% of reported issues stemming from 18% of our core product areas.

Next I gave them the high level stats for the 7 areas of concern:

  1. Headlines 17%
  2. Editor 14%
  3. Files and Folders 14%
  4. Forms and Surveys 14%
  5. Groups 11%
  6. Calendar 8%
  7. Reports 5%

Finally, I grouped the detailed user feedback by functional area. For example, I gave a heading of “Headlines” and then provided a bulleted list of all of the detailed feedback customers gave in response to the open ended feedback question.
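Grouping the open-ended answers under their functional-area headings is a one-pass job in Python. The comments below are invented placeholders, not real responses from the study:

```python
from collections import defaultdict

# Hypothetical (area, comment) pairs from the two survey questions.
responses = [
    ("Headlines", "Let me pin important headlines to the top."),
    ("Editor", "Autosave drafts more often."),
    ("Headlines", "Headlines get buried too quickly."),
]

# Collect every comment under its functional-area heading.
feedback_by_area = defaultdict(list)
for area, comment in responses:
    feedback_by_area[area].append(comment)

# Print each heading followed by its bulleted feedback list.
for area, comments in feedback_by_area.items():
    print(area)
    for c in comments:
        print(f"  - {c}")
```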

It made a neat package that my stakeholders loved. At a glance they could see the big picture, but if they wanted to deep dive into individual pieces of feedback, they could.

These research findings made a big impact on decisions that guided our annual product roadmap planning. We were able to identify usability issues, areas that needed UX love and even product gaps based on the research findings. Then we followed up with our customer base to conduct additional user research in person and through phone interviews to verify the results and dig deeper to make sure we were solving the right problems.

Why should your organization take advantage of this research style?

Conducting Pareto Principle-based user research is a trifecta: you obtain a clear view of really powerful data; your clients get excited about participating in product research and feel they are really being heard; and the method is simple, cheap, and effective.

It’s a great research method that can be done by professional researchers and novices alike.

If you give a Pareto Principle-based user research study a shot, I’d love to hear about your experience and your results! Hit me up on Twitter and let me know how things go.


About the author

Jennifer Aldrich is a UX & Content Strategist at InVision and a UX blogger at UserExperienceRocks.com. She's also a social media addict, a cartoon doodler, and an amateur photographer. She has presented at the UXPA International Conference, participated as a panelist in the AIGA Women's Leadership initiative, presented at the 2013, 2014, and 2015 Web Conferences, and guest lectured at Millersville University. She also serves as a member of the National Program Advisory Committee for Independence University. Jennifer has contributed to UXMag, Net Magazine, MediaShift, the InVision Blog, and A Beautiful Site, among other publications. She is fascinated by the intersection of psychology and UX, and loves having the opportunity to make the world a slightly better place each day through design.
