All Together Now

by Andrew Wirtanen, Colin Butler

Bringing stakeholders, usability experts, and domain experts together can make for a very powerful evaluation method.

Jakob Nielsen and Rolf Molich created the first list of usability heuristics—10 simple guidelines for creating optimal usability—over 20 years ago. Since then, heuristic evaluations have become one of the most popular usability evaluation methods, but little has changed in how they are typically conducted.

The term “heuristic evaluation” itself reflects the design of the method as performed by a single evaluator who is not necessarily a usability expert, where the set of heuristics would, in theory, provide an objective basis for finding issues. Nielsen himself quickly discovered, however, that usability experts find issues better than non-experts. Thus, heuristic evaluations are now typically conducted solely by usability experts, leading to the common use of the term “expert review”.

Since the industry has come to use the terms “heuristic evaluation” and “expert review” interchangeably, we will be treating them as equivalent in this article. We’ll talk a bit about the disadvantages of classical expert reviews, and then introduce a new method that attempts to address those shortcomings without compromising the strengths of the technique.

Trust Me, I’m an Expert

In our experience working in an agency setting, expert reviews sell well. They sound sexy and they don’t require the client to lift a finger. It really seems as simple as: “insert money, find issues.” Fix those issues and all is well, right?

Unfortunately, it’s not that easy. Expert reviews come with plenty of caveats. The first you’re likely to encounter is that sending a usability expert in to review a system, sight unseen, can lead to a number of roadblocks preventing or slowing their navigation through the software. While some of these will be genuine usability issues, there will be a number of problems that arise as a result of the expert lacking domain knowledge. As statistics whiz Jeff Sauro notes, issues that aren’t really issues (false positives) are one of the biggest complaints about expert reviews.

Communicating the issues found by an expert review can also pose problems. Experts may run into difficulties mid-review and need the client’s guidance before they can proceed. Another snag: problems may be communicated and never addressed—or ignored entirely. At the 2012 UPA International Conference, Cory Lebson presented the culmination of several years of work for a government client. In his case, traditional heuristic reviews with no client involvement led to a number of valid recommendations, but there was no evidence that any of the issues were attended to.

Another common danger in relying on expert reviews is something known as the “evaluator effect”, which is the difference in usability problems detected and in the severity assigned to each by different evaluators—look to “The Evaluator Effect: A Chilling Fact About Usability Evaluation Methods” by Morten Hertzum and Niels E. Jacobsen for a full dissection. Basically, a single review performed by a single evaluator suffers from this effect when the reviewer isn’t able to find all of the problems and is also relying on subjective values for how severe an issue might be. Performing multiple reviews with different reviewers reduces this effect, but at an additional cost and with rapidly diminishing returns.
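
To make the effect concrete, here is a small Python sketch of the “any-two agreement” measure Hertzum and Jacobsen use in their analysis: for every pair of evaluators, compare the overlap of their reported problem sets, then average across all pairs. The evaluators and problem sets below are entirely made up for illustration.

```python
# A toy illustration of the evaluator effect. Three hypothetical
# evaluators review the same interface and report overlapping but
# different problem sets.

from itertools import combinations

# Made-up problem sets; each "p" stands for one usability issue.
problem_sets = {
    "Evaluator A": {"p1", "p2", "p3", "p5"},
    "Evaluator B": {"p2", "p4", "p5"},
    "Evaluator C": {"p1", "p2", "p6", "p7"},
}

def any_two_agreement(sets):
    """Average |Pi & Pj| / |Pi | Pj| over all pairs of evaluators."""
    pairs = list(combinations(sets, 2))
    return sum(len(a & b) / len(a | b) for a, b in pairs) / len(pairs)

print(f"Any-two agreement: {any_two_agreement(list(problem_sets.values())):.0%}")
# Prints "Any-two agreement: 30%" for these made-up sets, squarely in
# the 5-65% range Hertzum and Jacobsen report across published studies.
```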

Building a Better Expert Review

At least one method attempts to address the evaluator effect: the group-based expert walkthrough, as described by Asbjørn Følstad, has a group of domain experts step through a series of defined tasks and identify issues they find along the way. This reduces the evaluator effect by using multiple reviewers, but also replaces usability experts with domain experts. In this scenario, some issues may be missed since domain experts may not know usability best practices and can fall victim to common review errors. They will also be less likely to identify the sorts of issues experienced by non-expert users.

Lebson also made an attempt at addressing the problem of clients ignoring his recommendations. His solution was to communicate directly with the client instead of just dropping a document into someone’s inbox. He found that with this small change, the client acted on 70% of his recommendations. So between the group-based expert walkthrough’s strength in numbers and Lebson’s suggestion of engaging the client directly, there should be a way to leverage both approaches to maximize client response and minimize the evaluator effect.

The method we recommend involves direct interaction and multiple reviewers, combining usability experts with domain experts. The output can look very similar to a traditional expert review, but issues are discovered by both sets of experts. We call it a “participatory expert review,” as stakeholders contribute directly to the review process as domain experts.

Steps for a Participatory Expert Review

  1. Organize a group of usability experts and domain experts.

    The group should include between three and ten participants, as is commonly recommended for productive brainstorming groups, and the experts should be as diverse as possible. Recent research suggests that novice usability evaluators find different issues than seasoned evaluators, so if you are considering using three usability experts, consider including one who is new to the field. Try to select domain experts who reflect the experience of a variety of stakeholders.

  2. Prepare the room for maximum productivity.

    Make sure that the room you have chosen for your review can accommodate all of your participants—remote stakeholders can contribute via speakerphone or teleconferencing. Everyone should be able to see a whiteboard or easel pad, where one usability expert will transcribe issues brainstormed by the group. Provide each participant with a list of heuristics and additional paper for notes.

  3. Have the facilitator and participants introduce themselves.

    One of the usability experts, preferably one with experience moderating focus groups or brainstorming sessions, will facilitate the meeting. After the facilitator introduces him or herself, the other participants take turns introducing themselves and describing their expertise. In the “Brainstorming” chapter of his book User Experience Re-Mastered, Chauncey Wilson suggests keeping it brief—a minute or less per person. If there is a lot of diversity among the participants, consider a warm-up exercise, like having participants talk about their favorite vacation place.

  4. Explain user experience and heuristics.

    Despite “UX” being added to the Oxford English Dictionary this year, not everyone has even a basic understanding of the field yet. Have the facilitator explain UX and some of the associated job responsibilities, along with the goals of the exercise. Next, the facilitator should walk the group through the list of usability heuristics, providing a brief explanation of each. You can use either a custom list of heuristics or an established list like Nielsen and Molich’s.

  5. Walk through the interface and identify issues!

    One person “drives” the group through the interface, pausing whenever an issue is identified. If the interface is not familiar to the usability experts, a domain expert should be the driver. That said, it is important that the usability experts question any assumptions the driver makes. For instance, if the driver skips a link on the assumption that users will know it is unimportant, the group should still evaluate that link as a live option.

    As issues are identified, have one usability expert write them on the whiteboard, leaving room below each for recommendations later. Use two colors to record findings: green for positive findings and red for issues. Do not worry about severity or recommendations yet, since your focus should be on identifying issues first.

    As in a focus group, the facilitator should ensure the group works effectively and that everyone is contributing. If the whiteboard is full, take a picture, erase the board, and keep going.

  6. Address recommendations and assign severity ratings.

    After you’ve made it through the interface and collected issues, it’s time to discuss recommendations and severity ratings as a group. What will it take to solve each issue? Which issues are the most important to address? Use asterisks or numbers to denote how severe an issue is, and write the recommendations directly below each issue. You may also wish to denote which issues are quick fixes.

  7. Transcribe issues and send the list to all of the participants. Identify more issues if necessary.

    You can now dismiss your participants. Take a picture of the whiteboard, then transcribe all of the issues, their severity ratings, and recommendations; a simple structure for such a transcript is sketched just after this list. If necessary, spend more time identifying any issues that were missed in the meeting or elaborating on recommendations.

    Send the list of issues to all of the participants and ask them if they have further input. If you are not in close contact, follow up with the team after a week or two to see if they need any help addressing the issues.
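
To make the transcription in step 7 concrete, here is a minimal sketch of such an issue log in Python. It is our own illustration rather than part of the method: the Finding fields, the 0-4 severity scale, and the sample entries are hypothetical. The heuristics are Nielsen’s widely used ten, a refinement of the original Nielsen and Molich list mentioned in step 4.

```python
# A minimal, hypothetical issue log for the step-7 transcript.

from dataclasses import dataclass

NIELSEN_HEURISTICS = [
    "Visibility of system status",
    "Match between system and the real world",
    "User control and freedom",
    "Consistency and standards",
    "Error prevention",
    "Recognition rather than recall",
    "Flexibility and efficiency of use",
    "Aesthetic and minimalist design",
    "Help users recognize, diagnose, and recover from errors",
    "Help and documentation",
]

@dataclass
class Finding:
    description: str          # what the group observed
    heuristic: str            # the heuristic it relates to, if any
    positive: bool = False    # green (positive finding) vs. red (issue)
    severity: int = 0         # hypothetical scale: 0 = positive, 1 = cosmetic ... 4 = blocker
    quick_fix: bool = False   # flagged by the group as cheap to address
    recommendation: str = ""  # filled in during step 6

def transcript(findings: list[Finding]) -> str:
    """Render the log with the most severe issues first, so the
    follow-up email leads with what matters most."""
    lines = []
    for f in sorted(findings, key=lambda f: f.severity, reverse=True):
        marker = "+" if f.positive else "*" * f.severity
        tag = " (quick fix)" if f.quick_fix else ""
        lines.append(f"[{marker}]{tag} {f.description} ({f.heuristic})")
        if f.recommendation:
            lines.append(f"    Recommendation: {f.recommendation}")
    return "\n".join(lines)

# Hypothetical sample entries, one red and one green.
log = [
    Finding("Save gives no feedback on click", NIELSEN_HEURISTICS[0],
            severity=3, quick_fix=True,
            recommendation="Confirm the save with a brief status message"),
    Finding("Labels match the domain vocabulary", NIELSEN_HEURISTICS[1],
            positive=True),
]
print(transcript(log))
```

Sorting by severity before sending the list out mirrors the asterisk convention from step 6 and keeps the most important issues at the top of the follow-up email.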

Our Experience

We’ve used the participatory expert review method several times and it appears to be very effective. In one review, a developer began addressing some of the quick issues on a laptop while we were still in the meeting (not at all recommended, but a promising sign that the method fosters engagement and responsiveness). So far, clients seem more receptive to issues, but we have not yet measured how many of the issues were addressed.

Potential Challenges

Participatory expert reviews may not solve all of the problems with traditional expert reviews, and they introduce challenges of their own. For example, it may be difficult to organize and facilitate a participatory expert review remotely.

Because the review happens on the fly, it can encourage hasty recommendations, and some issues may be missed entirely; we suggest that the usability experts walk through the interface again on their own after the meeting. If the interface or product is too large to cover in one sitting, you may need to schedule multiple meetings, or one long meeting with breaks as needed.

When discussing issues and recommendations, the group may digress and drive the process off-track. If that happens, the facilitator should take note and get the group back on topic.

Conclusion

Are your expert reviews as effective as they can be? Group reviews have been shown to find more issues, and direct involvement with stakeholders has been demonstrated to lead to more of those issues being addressed. A participatory expert review is one way to involve stakeholders in the process, find more issues, and maybe lead to better results than a traditional expert review.

 


 

Andrew Wirtanen

Andrew Wirtanen is a Lead Product Designer at Citrix in Raleigh, NC. He is an executive council member of the Triangle UXPA, and former president of the organization. He has experience with many usability engineering and interaction design methods. Prior to joining Citrix in 2013, he worked in consulting environments for six years. Andrew holds a master's degree in Human Factors in Information Design from Bentley University. He is @awirtanen on Twitter.

Colin Butler

Colin Butler is a graduate of North Carolina State University with an MS in Human-Computer Interaction. He believes that the key to creating a good user experience is good communication and process, but a little cleverness has been known to make a good experience into a great one.
