Most UX designers use qualitative research—typically in the form of usability tests—to guide their decision-making. However, using quantitative data to measure user experience can be a very different proposition. Over the last two years our UX team at Vanguard has developed some tools and techniques to help us use quantitative data effectively. We’ve had some successes, we’ve had some failures, we’ve laughed, we’ve cried, and we’ve developed ten key guidelines that you might find useful.
1. Evaluate your experience against something you care about. Is it meeting its objectives?
If I had a penny for every time I heard someone say, “this page got 10,000 unique visits last month,” I’d have… oh, at least enough for a beer. I typically respond by asking whether more or fewer visits would be better, whether they know how many users were actually looking for the page (if 100,000 were looking for it and only 10,000 found it, is that good?), whether those 10,000 users left the page satisfied, and whether their behavior changed as a result of using the page. For our measures to be meaningful and effective, they need to be relative to our objectives.
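As a sketch of what “relative to objectives” can mean in practice, here is a minimal example; all the numbers and names are hypothetical, invented purely to illustrate turning a raw count into a measure against an objective:

```python
# Hypothetical numbers: raw traffic means little until it is
# compared against an objective.
unique_visits = 10_000        # users who found the page
users_seeking_page = 100_000  # users we estimate were looking for it

find_rate = unique_visits / users_seeking_page
print(f"Find rate: {find_rate:.0%}")  # 10% -- suddenly 10,000 looks less rosy

# The same raw count measured against a different objective tells a
# different story, e.g., satisfaction among those who arrived.
satisfied_visitors = 6_500    # hypothetical survey result
satisfaction_rate = satisfied_visitors / unique_visits
print(f"Satisfaction among visitors: {satisfaction_rate:.0%}")
```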
2. The objectives of your experience will likely differ from those of the next person to read this.
Every project involves a unique user base and demographics, and has its own tasks, business models, products, services, and content, so you should expect your objectives to be unique to your experience. This means that you can’t simply copy the objectives (and measures) that someone else is using, or post to a metrics forum asking, “What should I measure?” You need to be prepared to define your own objectives and measures.
3. Measure how well tasks are satisfied by capabilities, not projects. Otherwise, you have no baseline.
There are three key terms in this guideline: tasks, capabilities, and projects. It’s important to understand the relationship between them.
Tasks are either something the user is trying to do (e.g., buy an item, find out how much something costs, compare an item to a similar one), or something the business wants the user to do (e.g., buy the item, buy accessories, use credit, refer a friend).
Capabilities are small pieces of our cross-channel experience that satisfy several user- or business-driven tasks (e.g., a webpage that provides information about an item and a way to buy it, a mobile app that scans a barcode and provides details on similar items).
Projects are teams of people, temporarily focused on creating or improving one or more capabilities (e.g., a team focused on improving the “buy the item” conversion rate).
The critical difference between capabilities and projects is that projects come and go as business priorities change and as they complete their work, whereas capabilities have a much longer lifespan. If you measure at the project level, each project will define its own set of objectives and measures, and you will never know whether your experience is improving or getting worse over time. Measuring at the capability level provides a stable baseline over time that shows the impact of several projects.
Our UX team at Vanguard has developed a technique and deliverable called the “Capability Strategy Sheet,” which identifies the tasks that a specific capability is trying to satisfy and the measures that will indicate success or failure. (The details are too long for this article; to learn more, check out my 2011 IA Summit presentation.)
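To make the capability/task/measure relationship concrete, here is a minimal sketch of how a capability and its measures might be represented as data. The structure and field names are my own illustration, not the actual Capability Strategy Sheet format:

```python
from dataclasses import dataclass, field

@dataclass
class Measure:
    name: str          # e.g., "conversion rate"
    target: float      # what success looks like

@dataclass
class Task:
    description: str   # a user- or business-driven task
    measures: list[Measure] = field(default_factory=list)

@dataclass
class Capability:
    name: str          # a long-lived piece of the experience
    tasks: list[Task] = field(default_factory=list)

# A capability outlives the projects that touch it, so its measures
# form a stable baseline across many projects (hypothetical example).
product_page = Capability(
    name="Product detail page",
    tasks=[
        Task("Buy the item", [Measure("conversion rate", target=0.05)]),
        Task("Find out how much the item costs",
             [Measure("price lookup success", target=0.95)]),
    ],
)
```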
4. Measuring outcomes can tell you if a capability is failing. Measuring drivers can tell you why.
The tasks (and associated measures) that a single capability is attempting to satisfy fall into two categories.
Outcome tasks and measures represent the thing that the user really wants to do or that the business really wants the user to do (e.g., buy the item).
Driver tasks and measures represent things that the user will do along the way that contribute towards, or can detract from, the desired outcome (e.g., find out how much the item costs).
It’s important to measure both outcomes and drivers because, like much in design, the process of defining the tasks and measures is iterative. You may be satisfying all the drivers perfectly, but might not be obtaining your desired outcome because you missed a driver.
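As an illustration (with invented names and numbers), outcome and driver measures for a single capability might be tracked side by side, so a failing outcome can be traced to the driver that is dragging it down:

```python
# Hypothetical measures for a product page capability. The outcome is
# what we ultimately want; drivers are the steps along the way that
# contribute to (or detract from) it.
measures = {
    "outcome": {"purchase completed": 0.02},   # observed rate
    "drivers": {
        "found the price": 0.96,
        "read a review":   0.71,
        "added to cart":   0.04,               # suspiciously low driver
    },
}

# A weak driver points to *why* the outcome is failing. If every known
# driver looks healthy but the outcome is still poor, that suggests the
# task list is missing a driver, and the definitions need another pass.
weak_drivers = {name: rate for name, rate in measures["drivers"].items()
                if rate < 0.5}
print("Drivers to investigate:", weak_drivers or "none -- maybe a missing driver?")
```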
5. Ask: how would the user behave if we nailed the design? How would they behave if we screwed it up?
Defining specific measures to evaluate how well (or poorly) a capability is satisfying a task is one of the most challenging aspects of the work we’ve been doing. We find that asking ourselves these two questions is a great starting point because it helps us focus on the user’s task and their actual behavior relative to that task.
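One way to operationalize those two questions (this is my illustration, not a prescribed technique from the team) is to write down the behavior you would expect in each case before collecting any data, then translate it into a measurable signal:

```python
# Hypothetical: for the task "find out how much the item costs",
# sketch the behavior we'd expect if we nailed the design or blew it.
expected_behavior = {
    "nailed it":  "user reaches the price within one page view",
    "screwed up": "user opens search/help, or leaves without the price",
}

def price_task_succeeded(pages_viewed: int, used_search_or_help: bool) -> bool:
    """Success per the 'nailed it' definition above."""
    return pages_viewed <= 1 and not used_search_or_help

print(price_task_succeeded(pages_viewed=1, used_search_or_help=False))  # True
print(price_task_succeeded(pages_viewed=3, used_search_or_help=True))   # False
```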
6. Be open about the uses and limitations of data, and involve people early to help gain buy-in.
Using quantitative data to inform decision-making can be uncomfortable for UX professionals more used to trusting their own judgment and experience. Encouraging a culture of openness and transparency where everyone has a voice can help increase the sense of ownership and acceptance of the data. It’s also important to ensure that the data isn’t perceived as infallible; it’s not a replacement for experience and good judgment!
7. Avoid misleading measures; the temptation to use the data is too strong. Ask: what if the result is X?
Once something has been measured, it’s very difficult to resist using the data, so before you measure, be confident that you will believe in, and act upon, the results. We’ve found that imagining ourselves a little way into the future with a specific result of, say, 10% (or whatever number represents a poor result), and then again with a result of 80% (or a good one), helps us sanity-check the measure and make sure it’s actionable.
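A minimal sketch of that sanity check, with hypothetical measure names and thresholds: play out a bad result and a good result, and name the action each would trigger. If either answer is “we’d do nothing,” the measure probably isn’t actionable.

```python
# Hypothetical what-if check for a proposed measure.
def sanity_check(measure: str, scenarios: dict[float, str]) -> None:
    print(f"Measure: {measure}")
    for result, action in scenarios.items():
        print(f"  if the result is {result:.0%} -> {action}")

sanity_check(
    "price lookup success",
    {0.10: "redesign the price placement on the page",
     0.80: "leave as-is; invest elsewhere"},
)
```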
8. Be unbiased. Don’t be afraid to measure things that might contradict your own opinion.
Don’t be too hasty to discard measures, though. Make sure you’re doing it for the right reasons—don’t fall prey to the “even if the result is X, we’re not doing it because I don’t agree with that decision” trap. Having an open mind is critical to maximizing your learning as well as maintaining the credibility of your data.
9. Don’t lose your perspective about how data fits into your decision-making.
In movies, machines have made some pretty bad decisions (WarGames, The Terminator, 2001: A Space Odyssey). In real life, we need to ensure that humans are not left out of the loop. Only the knowledge and experience that UX professionals bring can effectively balance the quantitative against the qualitative—things like emotional impact, brand values, and aesthetic appeal. Just as important, stakeholders need to understand this balancing act and the role of expertise and experience; otherwise they might start acting on the data themselves.
10. Start small. Pick a capability, identify objectives, define measures, and watch what happens.
This type of measurement and use of data doesn’t have to be a large enterprise-level initiative. In fact, starting small can be the pebble that starts the avalanche. Once stakeholders see the value of a data-informed approach, it becomes desirable across the organization.