Article No :686 | June 13, 2011 | by Cathleen Zapata
Efficiency, or the amount of time an expert user takes to complete a task, can have a direct impact on the bottom line. A few seconds off a call center task can save hundreds of thousands of dollars in a year. However, what is the impact of efficiency on consumers?
Measuring Task Efficiency
CogTool is a software product developed by Dr. Bonnie John’s lab at the Human-Computer Interaction Institute of Carnegie Mellon University. CogTool evaluates a design with a predictive human performance model, estimating the time an expert will take to complete a task. CogTool’s modeling procedure is based on the Keystroke-Level Model (KLM), which assigns a time value to each physical and cognitive action. Actions may include reading text on a screen, moving the mouse, moving the hand from the mouse to the keyboard, and typing.
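To make the KLM idea concrete, here is a minimal sketch of how a keystroke-level estimate is computed. The operator times below are the commonly cited defaults from the KLM literature (Card, Moran, and Newell), not CogTool’s exact internal values, and the operator sequence is a hypothetical micro-task invented for illustration; CogTool’s actual model handles mental-operator placement and other details far more rigorously.

```python
# Sketch of a Keystroke-Level Model (KLM) time estimate.
# Operator times are the commonly cited KLM defaults (seconds);
# CogTool's internal model is more detailed than this.
KLM_OPERATORS = {
    "K": 0.28,  # keystroke (average skilled typist)
    "P": 1.10,  # point the mouse at a target
    "B": 0.10,  # press or release a mouse button
    "H": 0.40,  # home hands between mouse and keyboard
    "M": 1.35,  # mental preparation
}

def klm_estimate(sequence: str) -> float:
    """Sum operator times for a sequence such as 'MPBHMKKKK'."""
    return round(sum(KLM_OPERATORS[op] for op in sequence), 2)

# Hypothetical micro-task: think, point at a form field, click,
# home hands to the keyboard, think, then type four characters.
print(klm_estimate("MPBHMKKKK"))  # predicted seconds for this micro-task
```

Summing operators like this, step by step across an entire application flow, is how a multi-minute task-time prediction is built up from sub-second actions.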
Example in Use
CogTool can be used to evaluate and enhance a form application process, checkout flow, or software UI where speed to completion is essential to the bottom line. For example, Metrics Marketing Group’s Research & Customer Experience team conducted a study utilizing CogTool to evaluate and compare the account opening process among 13 top financial institutions. The goal of this study was to measure the task time of opening a checking account online, benchmark the process, and use the insights to optimize a flow common to most banks’ websites.
Through this study, Metrics learned that the average online checking account application completion time was 6 minutes, 49 seconds, with the shortest being 4 minutes, 37 seconds and the longest being over 10 minutes. As this evaluation is based on the prediction of an “expert” user, this finding raised a concern: for non-expert users, actual completion times are likely to be substantially longer. In fact, research shows that when performing tasks online, the slowest 25% of users take 2.4 times as long as the fastest 25%.[*]
In addition to utilizing CogTool to measure task efficiency, this study also measured the total number of questions required to complete the application. This measurement showed that applications contained an average of 42 questions, 33 of which were required.
[Chart omitted; bank labels included Bank of America.]
Table 1: Predicted times shown for a user to complete an online checking account application.
The following chart demonstrates efficiency excluding time spent reading disclosures. This was important to review because previous research indicates that users typically do not read disclosures in full. In addition, almost 80% of users scan the page rather than reading word-for-word online. (Source: Sun)
[Chart omitted; bank labels included Fifth Third Bank and Bank of America.]
Table 2: Predicted times shown without the task of reading disclosures.
The efficiency of performing the task of opening a checking account online is only one obstacle to overcome. The UX team at Metrics has utilized survey research, usability testing, and numerous other methods to uncover common UX issues with this process. Key findings from these other methods include, to highlight just a few: frequent use of jargon, users not knowing what to do next, users failing to see site errors or not knowing how to correct them, concerns about site credibility and security, and question counts high enough to cause frustration, abandonment, and failure to complete.
Additional Points of Measurement
Incorporating multiple points of measurement to evaluate the user experience often yields the best results. According to Forrester Research, 58% of visitors who experience usability problems on a website never return. In addition, the average website can double its conversion rate by designing for increased usability. (Source: IBM)
Evaluating efficiency and task times is a helpful way to benchmark, identify the elements of a task that take longer than others, or compare design ideas. For interfaces where efficiency is the most vital evaluation metric, CogTool can be very helpful. It provides one point of measurement for evaluating a site’s usability in order to improve conversion and the overall customer experience. However, it is important to keep in mind that efficiency testing only goes so far.
It does NOT tell us:
- The “Whys” of User Behavior – We might know how long it will take an expert to complete a task, but we do not know why it takes actual site users longer. We do not know why a user might be unwilling to complete the application.
- Potential Risks in Design – We are unable to identify places where users will be confused, unwilling or unable to convert, or have a negative experience.
- Actual User Expectations and Needs – Task times do not provide insight into what actual users want to do, what they expect, what their goals are, how they go about selecting a product in the first place, or what overall process this interface fits into. For example, we can answer the question of how long it would take an expert user to complete the online checking account application, but not if that expert would even want to complete it in the first place.
- Non-Expert User Performance – CogTool and the KLM methodology are designed to measure expert use. However, for most interfaces we interact with—whether it’s a website or a DVR programming screen—actual users often do not use them frequently enough to become experts.
Other user research methods, such as usability testing, heuristic evaluations, user personas, card sorting, focus groups, and surveys, are essential to fully evaluate the user experience and maximize conversion.
To comprehensively measure and benchmark, additional points of measurement to consider include:
- Task Completion Success / Failure
- Time to Complete Task (of actual system users)
- Steps to Complete Task
- Frustration / Delight
- Number of Errors
- Severity of Errors
- System Usability Scale (SUS) Score
- Eye Tracking Measurements, such as:
  - Gaze Sequence
  - Hit Rate
  - Dwell Time
- Subjective Satisfaction
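Of the metrics above, the System Usability Scale (SUS) has a standard, well-documented scoring rule worth sketching: ten items rated 1 to 5, where odd (positively worded) items contribute the rating minus 1, even (negatively worded) items contribute 5 minus the rating, and the sum is multiplied by 2.5 to yield a 0 to 100 score. The code below is a minimal illustration of that published formula, not a substitute for administering the actual validated questionnaire.

```python
# Sketch of standard System Usability Scale (SUS) scoring.
# Ten items on a 1-5 scale; result is a 0-100 score.
def sus_score(ratings):
    if len(ratings) != 10:
        raise ValueError("SUS requires exactly 10 item ratings")
    total = 0
    for i, rating in enumerate(ratings, start=1):
        if not 1 <= rating <= 5:
            raise ValueError("ratings must be on a 1-5 scale")
        # Odd items are positively worded; even items negatively worded.
        total += (rating - 1) if i % 2 == 1 else (5 - rating)
    return total * 2.5

# A respondent who strongly agrees with every positive item and
# strongly disagrees with every negative item scores the maximum.
print(sus_score([5, 1, 5, 1, 5, 1, 5, 1, 5, 1]))  # 100.0
```

Note that a SUS score is not a percentage; scores are typically interpreted against published norms, where the often-cited average is around 68.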
Choose points of measurement based upon the overall goals of the interface, as well as the areas of the user experience that specifically need to be benchmarked and measured.