Software development is built around quantitative measurements: the time it takes an application to load, the amount of memory it uses, the load it places on the CPU. These measurements are easy to calculate and wonderfully quantitative. One reason some organizations tend to discount usability (both in practice and in artifacts like the severity descriptions in bug tracking systems) is the inaccurate view that usability is an amorphous, subjective thing that simply can't be scientifically quantified and measured. That assumption is incorrect.

The usability inspection technique of heuristic evaluation, introduced by Jakob Nielsen [2,3,4], has emerged as one of the most common ways for professional UX designers to evaluate the usability of a software application. Heuristic evaluations are extremely useful because they formally quantify the usability of an application against a set of well-defined and irrefutable principles. Usability violations can be quantified individually: either an interface supports undo or it does not; either an interface is internally consistent or it is not. Violations can also be quantified in aggregate: the application currently has 731 known usability issues.

Additionally, by building a tracking system around a set of agreed-upon principles, much of the debate along the lines of "there is no right or wrong with UI / every user is entitled to their personal opinion / all that matters is the ability to customize" currently found in some development communities can be significantly reduced. Usability heuristics help ground these debates, just as no one in software development currently argues in favor of data loss or in defense of crashing.

Injecting User Experience Principles Into a Bug Tracking Tool

Adapting a software development team's bug tracker to capture usability issues defined by a set of specific heuristics can reshape the way developers think about usability. Just as developers currently share a vocabulary for describing good and bad with concepts such as performance, data loss, and crashing, usability heuristics introduce additional concepts, like consistency, jargon, and feedback. All of these concepts, covering both the underlying implementation and the user interface, now have equal potential to impact the application at any level of severity, from trivial to critical.

Modifying a bug tracking system to track a heuristic evaluation of software is reasonably straightforward. Each issue must be associable with the specific usability heuristic it violates (for example: "using the term POSTDATA in a dialog is technical jargon"). Developers encountering these keywords likely won't have any additional interface design training, so each heuristic must be very clearly defined, with specific examples and detailed explanations. Additionally, letting developers view all of the bugs marked as the same type of issue, both current and resolved, is an effective way for them to learn more about the heuristic.
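As a concrete sketch of what this looks like in practice: Bugzilla exposes bug searches over a REST API, so a per-heuristic query can be built as a URL. The endpoint and parameter names below follow Bugzilla's REST search conventions but should be verified against your tracker's documentation; `heuristic_query_url` is an illustrative helper, not part of any Bugzilla API.

```python
from urllib.parse import urlencode

# Public Bugzilla REST search endpoint (Mozilla's instance, as an example).
BUGZILLA_REST = "https://bugzilla.mozilla.org/rest/bug"

def heuristic_query_url(keyword, base=BUGZILLA_REST):
    """Build a REST query URL for all open bugs tagged with a
    usability-heuristic keyword (e.g. a jargon violation)."""
    params = {
        "keywords": keyword,        # e.g. "ux-jargon"
        "keywords_type": "allwords",
        "resolution": "---",        # open bugs only
        "include_fields": "id,summary",
    }
    return f"{base}?{urlencode(params)}"

url = heuristic_query_url("ux-jargon")
```

Fetching that URL (with any HTTP client) returns the current and resolved violations of one heuristic, which is exactly the browsable learning resource described above.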

The Principles

These are the UX principles currently being used by developers working on Firefox and other projects in the Mozilla community. We are using Bugzilla's keyword functionality, similar to how bugs can already be flagged as violating implementation-level heuristics like data loss. These principles can be added to any bug tracking system that allows bugs to be tagged.

ux-feedback: Interfaces should provide feedback about their current status. Users should never wonder what state the system is in. [Source: Nielsen]
ux-implementation-level: Interfaces should not be organized around the underlying implementation and technology in ways that are illogical, or that require the user to have information not found in the interface itself. [Source: Nielsen, Cooper]
ux-jargon: Users should not be required to understand any form of implementation-level terminology. (This principle is a special case of ux-implementation-level.) [Source: Nielsen]
ux-control: Users should always feel like they are in control of their software. (This principle is often the nemesis of ux-interruption, especially when developers assume users want more control than they actually do.) [Source: Nielsen]
ux-undo: Actions should support undo so that users remain in control. (This principle is a special case of ux-control.)
ux-consistency: Software should be internally consistent, and externally consistent with similar interfaces, to leverage the user's existing knowledge. [Source: Nielsen]
ux-error-prevention: Interfaces should proactively try to prevent errors from happening. [Source: Nielsen]
ux-mode-error: Users should not encounter errors because the interface is in a different state than they expected. (This principle is a special case of ux-error-prevention.)
ux-error-recovery: Interfaces should proactively help users recover from both user errors and technology errors. (Preferably the error is addressed through ux-error-prevention so that it never occurs.) [Source: Nielsen]
ux-discovery: Users should be able to discover functionality and information by visually exploring the interface; they should not be forced to recall information from memory. (This principle is often the nemesis of ux-minimalism, since additional visible items diminish the relative visibility of the other items displayed.) [Source: Nielsen]
ux-efficiency: Interfaces should be as efficient as possible, minimizing the complexity of actions and the overall time to complete a task. [Source: Nielsen]
ux-minimalism: Interfaces should be as simple as possible, both visually and interactively, and should avoid redundancy. (This principle is often the nemesis of ux-discovery, since removing items or hiding them deep in the interface forces the user to rely more on memory than recognition.) [Source: Nielsen]
ux-interruption: Interfaces should not interrupt the user, and should never ask a question the user is not prepared to answer simply for a false sense of ux-control. In general, software should only speak when spoken to.
ux-tone: Interfaces should not blame the user, or communicate in a way that is overly negative or dramatic.
ux-natural-mapping: Controls should be placed in the correct location relative to the effect they will have. [Source: Norman]
ux-affordance: Controls should visually express how the user should interact with them. [Source: Norman]
ux-visual-hierarchy: Controls that are more important or more commonly used should leverage visual variables such as size and contrast, so that they have more dominance and weight relative to other controls. (This principle is an adaptation of ux-discovery.)
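For tagging and bulk analysis, the keyword set can also live in code. The sketch below assumes the ux-* keyword names used on Mozilla's Bugzilla (adjust them to whatever names your tracker uses), and `ux_keywords_of` is an illustrative helper, not a Bugzilla API:

```python
# The UX heuristic keywords as data, so scripts can validate tags and
# aggregate counts. Keyword names are assumed to match Mozilla's Bugzilla
# keywords; substitute your own tracker's names as needed.
UX_KEYWORDS = frozenset({
    "ux-feedback", "ux-implementation-level", "ux-jargon", "ux-control",
    "ux-undo", "ux-consistency", "ux-error-prevention", "ux-mode-error",
    "ux-error-recovery", "ux-discovery", "ux-efficiency", "ux-minimalism",
    "ux-interruption", "ux-tone", "ux-natural-mapping", "ux-affordance",
    "ux-visual-hierarchy",
})

def ux_keywords_of(bug):
    """Return the recognized UX heuristic keywords on a bug record
    (a dict with a 'keywords' list, the shape Bugzilla's REST API uses)."""
    return sorted(UX_KEYWORDS.intersection(bug.get("keywords", [])))
```

For example, `ux_keywords_of({"keywords": ["ux-jargon", "regression"]})` returns `["ux-jargon"]`, which makes it easy to count violations per heuristic across an entire bug database.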

Side Effects of Distributed Heuristic Evaluation

Today, heuristic evaluations are normally performed in corporations and academia by a small number of designers who are extremely well practiced at identifying usability issues. However, two important aspects of the heuristic evaluation method from when it was first introduced are worth noting:


First, the method of heuristic evaluation has its roots not in the functional purpose of evaluating usability, but in the even more basic purpose of teaching usability. Nielsen's 1989 SIGCHI Bulletin article, Teaching User Interface Design Based on Usability Engineering [2], shows that heuristic evaluation was introduced as part of the curriculum for a master's degree in computer science. This is still true today: the road to becoming a good UX designer begins with mastering the identification of well-defined heuristics.

Power in Numbers

The second important aspect of heuristic evaluation is that the number of evaluators was quickly found to play a major role in its success. Nielsen wrote in 1990 that "evaluators were mostly quite bad at doing such heuristic evaluations… they only found between 20 and 51% of the usability problems in the interfaces they evaluated. On the other hand, we could aggregate the evaluations from several evaluators to a single evaluation and such aggregates do rather well" [3]. The more people an organization has identifying unique usability problems in a product, the better the product will become, even if that organization already has a very strong UX team.
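Nielsen later modeled this aggregation effect (in work not among this article's references): if a single evaluator finds any given problem with probability p (roughly 0.31 on average in his data), then n independent evaluators are expected to find a share of 1 - (1 - p)^n of all problems. A small sketch, with the 0.31 figure as an assumed default:

```python
def proportion_found(evaluators, p=0.31):
    """Expected share of usability problems found by n independent
    evaluators, per the 1 - (1 - p)**n aggregation model, where p is the
    probability that a single evaluator spots any given problem."""
    return 1 - (1 - p) ** evaluators
```

With p = 0.31, one evaluator finds about 31% of the problems, five evaluators together find roughly 84%, and fifteen find over 99%, which is why distributing the evaluation across many developers pays off.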


This practice has the potential to drive UX improvements in organizations that haven't yet fully embraced UX in their product development or IT processes. It influences and enlists engineers to create better usability outcomes by making usability evaluation integral to existing testing, quality control, and issue tracking systems. The approach is a realistic, practical means of infusing better UX into companies because it embraces the way most IT organizations actually work (driven by quality control and testing) and uses tools already integral to their workflow.

Having your software development team adopt a shared vocabulary for UX principles, and having them track those principles in the same bug tracking software they use for all other issues, can go a long way toward bridging any divide between design and development teams. Adding these keywords to your bug tracking software obviously doesn't remove the need for a very strong UX team inside your organization, but it will put that team in a position to have more influence over the software you are developing.

  1. Schwartz, D. and Gunn, A. 2009. Integrating user experience into free/libre open source software: CHI 2009 special interest group. CHI EA '09. ACM, New York, NY, 2739-2742.
  2. Nielsen, J. and Molich, R. 1989. Teaching user interface design based on usability engineering. SIGCHI Bull. 21, 1 (Aug. 1989), 45-48.
  3. Nielsen, J. and Molich, R. 1990. Heuristic evaluation of user interfaces. CHI '90. ACM, New York, NY, 249-256.
  4. Nielsen, J. 1994. Enhancing the explanatory power of usability heuristics. CHI '94. ACM, New York, NY, 152-158.
  5. Bugzilla Installation List,


I began posting usability problems to our bug tracking system, and it got a lot of results. I find these systems tedious, but the effort is worthwhile. This article seems to suggest installing bug tracking software exclusively for usability issues; I think it is much better to use the tracking software the engineering team already uses. Usability and design teams often have trouble getting their ideas implemented partly because they work with different tools and systems than the engineering and support teams. This approach works because the issues become part of the larger list the engineers work from, so they aren't lost and have to be addressed. The engineers also don't want to keep track of other lists or e-mail. Management assesses a project by its number of defects, and usability defects are still defects that get management's attention. The approach isn't foolproof: usability defects can be assigned a low priority or closed, but that decision is a formal one that requires consideration.

Another benefit of using the engineers' bug tracking software is that usability issues stay alive over a much longer period. They can be categorized into the same categories engineers use, and they can be searched and thus associated with related engineering issues. In the past, I have often been disappointed with heuristic evaluations. I've done them many times, and the results seem so promising, but then the issues aren't addressed. I think there are a number of reasons for that, a main one being that many of the issues don't pertain to the projects engineering is currently working on. Engineers are even annoyed that the usability guy is bringing up issues that are so off subject. Subsequently the issues get forgotten, so even if a fitting project comes up later, everyone has forgotten that heuristic evaluation. But if the issues are in the bug tracking system, there is a much higher chance they will be remembered.


This is an excellent list of guiding UX principles. Too often our CRM managers have their time absorbed dealing with customer service issues that could have been resolved had only our reps used jargon-less communications with the customers. I suppose this is the differentiation between reps and managers, but there is no reason we shouldn't strive to make it less frequent a problem. I'm going to copy this list and share it with every rep on our SharePoint team and hopefully it will sink in.


The reason behind listing large numbers of principles in a blog entry is not to get you to memorize the whole list and work through it every time you are doing UX. Instead, we as bloggers build these lists to try to begin infiltrating your head with good practices. If you can take two points from this post and combine them with two points from another, all of a sudden we are helping you grow. This is not my blog, but that's how I know we feel as a community of bloggers.

Sandoval | Blog Editor | Braun Corporation

I agree with the first poster, timeyeo. 17 principles are a little bit ridiculous to expect to be practiced throughout the entire field. It does make it harder to pinpoint and therefore classify whatever issues may or may not arise. It also makes both teaching and learning more difficult. I do believe that a lot of these principles can be put into segments, or larger categories, to help them be understood as well as retained. I enjoyed the article greatly. Would love to hear thoughts on measurability of success, though!

This is great for me as a UX designer to evaluate alternatives and clearly categorize benefits. As an addition to the bug tracker, 17 categories works for me as a drop-down list. I don't think anyone not UI-related would read past the general "UI/UX" bug category, so most people except me won't care, most of the time, which of the 17 subcategories applies to a given UI/UX bug tracker entry. What I want to use it for is internal TQM analysis: finding out which categories are consistently omitted from most of the web site I'm working on. In general, I can tell my IT head the site could be enhanced with "ux-minimalism", then point out examples if needed to support my claim, or make ROI-based arguments for why "ux-minimalism" should be improved and what measurable results to expect, such as faster reading time per page, an increase in conversions, or 3% fewer bounces from implementing "ux-minimalism" strategies on the home page at a cost of 5 hours of work.

I appreciate any effort in furthering our field, but I see several issues with what's proposed in this article:

1. 17 principles are a tad too many
- Within a group of experts, communicating fluently using these 17 principles would be challenging, but doable. Communicating them to non-UX, non-tech members of the team: near impossible; even if you made it through all 17, they've already forgotten most of the earlier ones.
- 17 principles also create a lot of pain when classifying the issues discovered. Classification becomes work in and of itself.

2. How measurable are these principles?
- Let's take minimalism, for example. To measure something accurately, what and how we measure must leave nothing to interpretation. A litre of water is a litre of water. A kilo of rice is a kilo of rice. A minimal interface is... what?

Yes, these principles create a vocabulary that we can use to describe and talk about the issues we discover, which is important. However, I feel we need to narrow down the set to a smaller group of key measurable attributes. Attributes may differ across mediums, but the outcome should be the same: looking at the measured outcome of the quantified evaluation, we should get a sense of what the product evaluated is doing well and where we should improve.