

UX Reviews: Which one? How long?

by Frances Miller
7 min read

How to select the right type of UX review to suit a project’s stage and business objectives, and to yield quick results for the client.

Reviewing the websites and applications of clients and their competitors is big business for UX professionals.

Expert evaluations, competitor reviews, and opportunity reviews provide key insights for clients, so choosing the right type of review for the job is a vital UX skill. Three questions about each type of review help make selection easier:

  • When is it best to use this type of review?
  • What is it most useful for?
  • What is it not good for?

Also, clients under business pressure often have aggressive deadlines and expect the most useful results in the shortest possible time, so deciding how long to spend on a review is crucial. For UX reviews, the 80:20 rule generally applies: 80% of the effects come from 20% of the causes. An 80:20 approach works best in most cases, helping a review quickly identify the vital few issues that make the biggest difference to results.
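The 80:20 idea reduces to a simple prioritization: sort candidate issues by estimated impact and keep the vital few that account for most of the expected benefit. A minimal sketch in Python; the issue names and impact scores here are hypothetical reviewer estimates, not from the article:

```python
# Sketch of 80:20 issue prioritization for a UX review.
# Impact scores are hypothetical estimates (e.g., effect on task completion).
issues = {
    "confusing checkout form": 40,
    "hidden primary navigation": 25,
    "ambiguous error messages": 15,
    "inconsistent button labels": 8,
    "low-contrast footer links": 5,
    "outdated help screenshots": 4,
    "misaligned icons": 3,
}

def vital_few(issues, coverage=0.8):
    """Return the smallest set of issues covering `coverage` of total impact."""
    total = sum(issues.values())
    selected, running = [], 0
    for name, impact in sorted(issues.items(), key=lambda kv: -kv[1]):
        selected.append(name)
        running += impact
        if running >= coverage * total:
            break
    return selected

print(vital_few(issues))
```

With these invented scores, three of seven issues cover 80% of the estimated impact, which is exactly the kind of short list a one-day review aims to produce.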

So let’s look at the three types of review, their uses, and how long to take for each.

Expert Evaluations

Expert evaluation methodology varies widely among practitioners. Most commonly, it involves a run-through and evaluation of the efficiency and effectiveness of the major user tasks, a heuristic evaluation, and an assessment of best practices in navigation, layout, IA, nomenclature and labeling, window and panel behavior, and many other attributes.

An expert evaluation should also focus on the achievement of business objectives, for example:

  • Does the product help build a relationship with the user?
  • Does it encourage the user to stay on the site?
  • Does it onsell or upsell effectively?
  • Does it support and complement other channels?

For an experienced UX practitioner, a minimum of one day of analysis will yield the most important insights and key recommendations. A brief write-up that summarizes key findings and indicates some possible recommendations can be produced in this time. Of course, preparing a full presentation or written report for circulation will take longer; good communication takes time to prepare. An application with complex workflows and tasks may take longer than a day to analyze, and longer to write up.

80:20 expert evaluations are best when:

  • the focus is on improvement rather than a wholesale redevelopment.
  • a quick handle on issues and areas for improvement is needed.

80:20 expert evaluations are good for:

  • identifying problems and blockages for users.
  • a quick snapshot of strengths and problems, as well as some recommended solutions.
  • setting priorities and developing a roadmap as a preparation for redevelopment.

80:20 expert evaluations are not good for:

  • identifying the unexpected. After all, the reviewer is not a real user experiencing successes and making mistakes. User testing can reveal surprising insights, which can be as diverse as the individuals involved.
  • identifying the new and innovative, although great ideas may naturally emerge.
  • assessing early concepts and designs.

Competitor Reviews

A competitor review can consist of a:

  • functional audit of client and competitor sites—“Do we have what they have?”
  • comparison of performance against standards, heuristics, and task completion to determine which competitor does it best and, more importantly, how the client stacks up.
  • comparison of salient characteristics—dimensions of the user experience, such as trust or personalization, that distinguish businesses in the competitive space.

Competitor reviews are most valuable when they focus on identifying gaps and opportunities for differentiation. They ask the question, “What does this website, application, or product offer that fills an important gap in the marketplace?” A UX expert can then design an experience that will bring this differentiator to life for the user.

If an expert evaluation has already been completed, I think a review of four competitor websites will take a minimum of two days to yield the vital few insights and recommendations for improvements that will make the greatest impact. Once again, writing up the analysis and comparison of complex applications will take additional time.

In some sectors, businesses watch each other so closely and work so hard to keep up that inadvertently a pattern of similarity emerges between competitors in terms of functionality and performance. For various reasons, differentiating with new functions may not be an option at all, or at least, not any time soon. In these cases, a competitor review can still help a business stand out by improving the customer’s experience of a key existing business process, such as filling in an application form for a financial product or completing an e-commerce purchase. Improvements here can have an impressive impact on customer acquisition and the bottom line.

Michael Hawley, writing for UX Matters, suggests another useful competitor review approach: mapping salient characteristics as a means of achieving differentiation. These reviews focus on identifying dimensions that distinguish competitors in a competitive space, such as trust or personalization. Each website is scored on every dimension and plotted on a Kiviat chart (which looks like a spider’s web, with each dimension being a spoke). Comparing the diagrams of a client and their competitors reveals their relative performance on the salient dimensions and can expose opportunities for differentiation that might otherwise be missed.
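Behind the Kiviat chart is a simple data structure: a score per site per dimension, where the widest gap between the client and the best competitor points to the dimension most in need of attention. A hypothetical sketch; the sites, dimensions, and scores below are invented for illustration:

```python
# Hypothetical salient-characteristic scores (1-5), the kind plotted on a Kiviat chart.
scores = {
    "client":       {"trust": 3, "personalization": 2, "simplicity": 4, "transparency": 3},
    "competitor_a": {"trust": 4, "personalization": 3, "simplicity": 3, "transparency": 2},
    "competitor_b": {"trust": 5, "personalization": 2, "simplicity": 2, "transparency": 3},
}

def biggest_gap(scores, client="client"):
    """Dimension where the client trails the best competitor by the widest margin."""
    gaps = {}
    for dim in scores[client]:
        best_rival = max(s[dim] for name, s in scores.items() if name != client)
        gaps[dim] = best_rival - scores[client][dim]
    return max(gaps, key=gaps.get)

print(biggest_gap(scores))
```

In this invented data the client trails furthest on trust, so that is where the review would focus; a dimension where every site scores low would instead signal an open differentiation opportunity.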

80:20 competitor reviews are best when:

  • starting anew or doing a substantial redesign.
  • the business knows exactly what it wants to achieve, e.g., differentiation by new feature or function, or significant improvement of an existing core process relative to competitors.
  • a picture of the position in the market can shape an impending project or make a case for investment in a project.

80:20 competitor reviews are good for:

  • identifying the minimum user experience to keep a client ahead of competitors.
  • seeing if competitors already have a client’s planned differentiator or if a perceived gap actually does exist.
  • identifying new UX differentiators.
  • generating ideas; competitor practices can be a springboard for creativity.

80:20 competitor reviews are not good for:

  • knowing what users really want and would be delighted by. A competitor review just reveals what they are currently getting.
  • innovating. Designers need to be careful not to have their thinking unconsciously narrowed by knowing what competitor features, functions, and designs are like.

Opportunity Reviews

An opportunity review isn’t limited to competitors, the same domain, or even the same industry. UX designers are constantly scanning the environment for inspiration, ideas, and models. An opportunity review makes the investigation more purposeful. For example, when working on a website that requires a complex decision-making process (e.g., choosing a mortgage), it might be helpful to look at how complex decision-making is supported in another domain (such as selecting a university course). The idea is to help a client leapfrog competitors by coming up with something completely new and different.

An opportunity review should involve two or more UX experts, at least at the outset. They analyze the brief and draw on their own experience and knowledge to identify great opportunities, and then research even further afield. The opportunities unearthed may be numerous. They need to be assessed for feasibility and for appropriateness to business objectives and user needs.

Once again, I think the best opportunity review is a quick opportunity review. It is important to maintain momentum and keep in touch with fellow reviewers. A time-limited research period (say, one day) ensures the brief stays top of mind and no one gets lured away by the siren call of coolness. Then it’s quickly on to assessment, organization, and reporting.

80:20 opportunity reviews are best when:

  • starting anew or doing a substantial redesign.
  • there’s clarity about which precise objectives the website, application, or product is to achieve. The focus of the investigation is then on finding new and creative options for how to achieve them.

80:20 opportunity reviews are good for:

  • innovating in the UX Design Studio.
  • loosening up the client’s or stakeholder’s preconceptions, helping to get buy-in to a new approach.
  • exploring different conceptual model solutions.

80:20 opportunity reviews are not good for:

  • borrowing designs and ideas wholesale. It is a big mistake to adopt something just because it is cool; above all, ideas must fit the user and business needs.
  • clients who need concrete facts and figures or who are not yet ready to make an imaginative leap.
  • detailed UX design.

Beyond 80:20

Although it’s generally best to keep reviews pragmatically short, there are exceptions.

Some research objectives require a more in-depth review and a different methodology that takes longer and involves more detailed analysis and reporting.

For example, an expert evaluation methodology that focuses on auditing will quantify performance, or lack of it, over time. This type of review evaluates heuristics and best practices, and goes further to systematically assess the efficiency and effectiveness of all user tasks as well. It:

  • provides scores for heuristics and for all tasks as a baseline.
  • measures performance against the baseline at intervals in the future.
  • draws conclusions and makes recommendations on the basis of the improvement or decline shown by the comparison.
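The baseline-and-interval comparison above amounts to a per-task score delta. A minimal sketch, with hypothetical task names and scores standing in for real audit data:

```python
# Hypothetical task-efficiency scores (0-100) from an auditing-style expert evaluation.
baseline  = {"open account": 62, "transfer funds": 75, "update details": 80}
follow_up = {"open account": 78, "transfer funds": 74, "update details": 85}

def deltas(baseline, follow_up):
    """Improvement (positive) or decline (negative) per task since the baseline."""
    return {task: follow_up[task] - baseline[task] for task in baseline}

print(deltas(baseline, follow_up))
```

Repeating this at each interval turns the one-off snapshot into the longitudinal picture described below, with each task’s trend feeding the conclusions and recommendations.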

As a result, the evaluation reports are extremely comprehensive and detailed.

In an article in UX Matters, Paul Bryan makes a case for what he calls “competitive benchmarking.” This process is similar to the 80:20 competitor review but is much more in-depth, providing a very detailed picture of relative positioning of competitors in the marketplace. While competitive benchmarking is resource intensive, it allows a client to make comparisons in relation to the identified parameters over time. The client has a moving picture of their performance in the marketplace rather than just a snapshot.

These approaches move UX reviews from a one-off basis to more in-depth, longitudinal research. The association with the client can continue for a much longer time, with greater costs but also significant benefits for the client.


Three of the biggest challenges for a UX consultant are:

  • meeting the client’s brief
  • managing client expectations
  • time pressure

Careful consideration of UX reviews and time allocated for them can help with all three. Selecting the right type of review to suit the client’s project stages and business objectives will increase client satisfaction. Also, clearly explaining the benefits of each type of review, and what the client will and won’t get, helps manage their expectations.

In terms of time pressure, the 80:20 approach will provide the most benefit for hours spent, and thus better value for the client’s money. However, keep in mind that time is not the only consideration. In some cases, going beyond 80:20 reviews to more in-depth methodologies may add value for clients that will have them coming back for more.

Frances Miller

Frances has ten years’ experience as a Principal User Experience Designer at PTG Global, working in UX design agencies with clients across a range of industries, including finance, insurance, travel, education, natural resources, and government. She has worked on many UX projects, gathering requirements and designing and testing UX for websites, complex applications, mobiles, and tablets.

In the past, Frances has also worked in strategic and business planning, communications, and marketing; she now works with businesses to align their digital strategy with business outcomes. She likes the challenge of tackling the new and complex, has worked on feasibility projects for startups, and has scoped large, multi-faceted projects.


