
Five UX Research Pitfalls

by Elaine Wherry
12 min read

Avoiding the growing pains and pitfalls associated with becoming a user-driven company.

In the last few years, more and more organizations have come to view UX design as a key contributor to successful products, connecting teams with end-users and guiding product innovation within the organization. Though it’s fantastic to see this transition happen, there are growing pains associated with becoming a user-driven organization. These are the pitfalls that I see organizations grappling with most often.

Pitfall 1: It’s easier to evaluate a completed, pixel-perfect product, so new products don’t get vetted or tested until they’re nearly out the door.

Months into a development cycle and just days before the release date, you realize that the UI has serious flaws or missing logic. If you’re lucky, there is enough flexibility in the schedule to allow grumbling engineers to re-architect the product. More likely, though, the PM will push to meet the original deadline with the intent to fix the UI issues later. However, “later” rarely happens. Regardless, everyone wonders: how could these issues have been caught earlier?

The UI is typically built after the essential architectural elements are in place, and it can be hard to test unreleased products with users until the very last moment. However, you can gather feedback early in the process:

  • Don’t describe the product and ask users if they would use it. In this case, you are more likely testing your sales pitch than the idea itself. If you ask users if they want a new feature, 90% of the time they’ll say yes.
  • Test with the users you want, not the users you already have. If you want to grow your audience with a new product, you should recruit users outside your current community.
  • Validate that the problem you are solving actually exists. Early in the design cycle, find your future users and research whether your product will solve their real-world problems. Look for places where users are overcoming a problem via work-around solutions (e.g., emailing links to themselves to keep an archive of favorite sites) or other ineffective practices (e.g., storing credentials in a text file because they can’t remember their online usernames and passwords).
  • Verify your mental models. Make sure that the way you think about the product matches the way your users think about it. For instance, if you’ve been pitching your product idea to your coworkers as “conversational email” but your actual users are teenagers who primarily use text messaging, then your email metaphor probably won’t translate to your younger users. Even if you don’t intend to say “conversational email” in your product, you will unconsciously make subtle design choices that limit your product’s success until you find a mental model that fits your users, not your coworkers.
  • Prototype early. Create and test a Flash-based or patched-together prototype internally as soon as possible. Even if your prototype doesn’t resemble a finished product, you’ll uncover the major issues to wrestle down in the design process and develop confidence in your direction. You’ll also have an easier time spotting the areas of the product that need animations, on-the-fly changes, or other engineering-intensive work that wasn’t recognized in the project scope because the product was only explored in wireframes and design specs.
  • Plan through v2. If you intend to launch a product with minimal vetting or testing, make sure you’ve written down and talked about what you intend for the subsequent version. One of the downsides of the “release early, release often” philosophy is that it’s easy to get distracted or discouraged if your beta product doesn’t immediately succeed. Upon launch, you might find your users pulling you in a direction you hadn’t intended because the product wasn’t fully fleshed out, or you might spend weeks bug-fixing and lose sight of the big picture. Once the first version is out the door, keep your team focused on the big picture and dedicated to that second version.

Pitfall 2: Users click on things that are different, not always things they like. Curious trial users will skew the usage statistics for a new feature.

Upon adding a “Join now!” button to your site, you cheer when you see an unprecedented 35% click-through rate. Weeks later, registration rates are abysmal and you have to reset expectations with crestfallen teams. So you experiment with the appearance of your “Join now!” button by changing its color from orange to green, and your click rates shoot up again. But a few days later, your green button is again performing at an all-time low.

It’s easy for an initial number spike to obscure a serious issue. Launching a new feature into an existing product is especially nerve-wracking because you only have one chance to make a good first impression. If your users don’t like it the first time, they likely won’t try it again and you’ve squandered your best opportunity. Continuously making changes to artificially boost numbers leads to feature-blindness and distrustful users. Given all of this, how and when can you determine if a product is successful?

  • Instrument the entire product flow. Don’t log just one number. If you’re adding a new feature, you most likely want to know at least three stats: 1) what percentage of your users click on the feature, 2) what percentage complete the action, and 3) what percentage repeat the action on a different day. By logging the smaller steps in your product flow, you can trace the usage statistics across all of these points and look for significant drop-offs (see the first sketch after this list).
  • Test in sub-communities. If you are launching a significant new feature, launch it in another country or to a small bucket of users and monitor your stats before launching more widely (the second sketch after this list shows one way to bucket users deterministically).
  • Dark-launch features. If you are worried that your feature could impact site performance, launch the feature silently without any visible UI and look for changes in uniques, visit times, or reports of users complaining about a slow site. You’ll minimize the number of issues you’ll have to debug at the actual launch.
  • Anticipate a rest period. Don’t promise statistics the day after a release. You’ll most likely want to see a week of usage before your numbers begin to level off.
  • Test the discoverability of your real estate. Most pieces of your UI will have certain natural discoverability rates. For instance, consider temporarily adding a new link to your menu header for a very small percentage of your users just to understand the discoverability rates for different parts of your UI. You can use these numbers as a baseline for evaluating future features.
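
As a rough illustration of what instrumenting the whole flow can look like, here is a minimal TypeScript sketch. The event names, the /log endpoint, and the helper names are hypothetical; the point is that every step in the funnel emits its own event so that drop-off between consecutive steps can be computed later.

```typescript
// Minimal funnel-instrumentation sketch. Event names and the /log
// endpoint are hypothetical; adapt them to your own analytics pipeline.

type FunnelStep = "feature_seen" | "feature_clicked" | "action_completed" | "repeat_visit";

// Fire-and-forget beacon from the client for each step a user reaches.
function logEvent(step: FunnelStep, userId: string): void {
  navigator.sendBeacon("/log", JSON.stringify({ step, userId, ts: Date.now() }));
}

// Later, offline: given per-step unique-user counts, report how many
// users carried through from each step to the next.
function dropOffs(uniques: Record<FunnelStep, number>): void {
  const steps: FunnelStep[] = ["feature_seen", "feature_clicked", "action_completed", "repeat_visit"];
  for (let i = 1; i < steps.length; i++) {
    const rate = uniques[steps[i]] / uniques[steps[i - 1]];
    console.log(`${steps[i - 1]} -> ${steps[i]}: ${(rate * 100).toFixed(1)}% carried through`);
  }
}
```

With made-up counts such as dropOffs({ feature_seen: 10000, feature_clicked: 3500, action_completed: 900, repeat_visit: 240 }), the report would show the biggest drop-off sitting between clicking and completing, which is exactly the kind of signal a single top-line number hides.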

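The sub-community and dark-launch tactics above share one mechanism: assigning each user to a bucket deterministically, so the same user always gets the same experience. Below is a minimal sketch under assumed names; the rolling hash, the 5% rollout, and the flags are illustrative, not production-grade. The same gate also works for the small-percentage discoverability probe in the last bullet.

```typescript
// Deterministic bucketing sketch: hash a user ID to a stable number in
// [0, 100) and compare it to a rollout percentage. The hash and
// thresholds here are illustrative only.

function bucket(userId: string): number {
  let h = 0;
  for (const ch of userId) {
    h = (h * 31 + ch.charCodeAt(0)) >>> 0; // simple 32-bit rolling hash
  }
  return h % 100;
}

const ROLLOUT_PERCENT = 5; // launch to a small bucket first
const DARK_LAUNCH = true;  // exercise the code path without visible UI

function shouldRunFeature(userId: string): boolean {
  return bucket(userId) < ROLLOUT_PERCENT;
}

function shouldShowFeatureUI(userId: string): boolean {
  // During a dark launch the feature runs (so you can watch performance
  // metrics), but the UI stays hidden until you flip DARK_LAUNCH off.
  return shouldRunFeature(userId) && !DARK_LAUNCH;
}
```
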
Pitfall 3: Users give you conflicting feedback.

You are running a usability study and evaluating whether users prefer to delete album pictures using a delete keystroke, a remove button, a drag-to-trash gesture, or a right-click context menu. After testing a dozen participants, your results are split among all four potential solutions. Maybe you should just recommend implementing all of them?

It’s unrealistic to expect users to understand the full context of our design decisions. A user might suggest adding “Apply” and “Save” buttons to a font preference dialog. However, you might know that an instant-effect dialog where the settings are applied immediately without clicking a button or dismissing the dialog makes for an easier, more effective user experience. With user research, it’s temptingly easy to create surveys or design our experiments so study participants simply vote on what they perceive as the right solution. However, the user is giving you data, not an expert opinion. If you take user feedback at face value, you typically end up with a split vote and little data to make an informed decision.

  • Ask why. Asking users for their preference is not nearly as informative as asking users why they have a preference. Perhaps they are basing their opinion upon a real-world situation that you don’t think is applicable to the majority of your users (e.g., “I like this new mouse preference option because I live next to a train track and my mouse shakes and wakes up my screen saver”).
  • Develop your organization’s sense of UI values. Know which UI paradigms (e.g., Mac vs. Windows, Web vs. desktop) and UI values (e.g., strong defaults vs. lots of customization, transparency vs. progressive disclosure) your team favors. When you need to decipher conflicting data, you’ll have this list for guidance.
  • Make a judgment call. Offering multiple forms of the same UI rarely helps users; in most cases it adds ambiguity or compensates for a poorly designed UI. When user feedback conflicts, you have to make a judgment call based upon what you know about the product and what you think makes sense for the user. Only in rare cases will all users have the same feedback or opinion in a research study. Making intelligent recommendations based upon conflicting data is what you are paid to do.
  • Don’t aim for the middle ground. If you have a legitimate case for building multiple implementations of the same UI (e.g., language differences, accessibility, corporate vs. consumer backgrounds), don’t fabricate a hodgepodge persona (“Everyone speaks a little bit of English!”). Instead, do your best to dynamically detect the type of user upfront, adapt your UI for that user, and offer an easy way to switch (see the sketch after this list).
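
One way to read “detect upfront, adapt, and offer a switch” in code: default the UI language from the browser locale, but always honor an explicit user override. A minimal browser-side sketch; the ui_lang storage key and the supported-language list are assumptions for illustration.

```typescript
// Default from detection, but let an explicit user choice win.
// The storage key and supported-language list are illustrative.

const SUPPORTED = ["en", "de", "ja"];

function isSupported(lang: string | null): boolean {
  return lang !== null && SUPPORTED.includes(lang);
}

function initialLanguage(): string {
  // 1) An explicit switch the user made earlier always wins.
  const saved = localStorage.getItem("ui_lang");
  if (isSupported(saved)) return saved!;
  // 2) Otherwise detect from the browser locale, e.g. "de-AT" -> "de".
  const detected = navigator.language.split("-")[0];
  return isSupported(detected) ? detected : "en";
}

function switchLanguage(lang: string): void {
  if (!isSupported(lang)) return;
  localStorage.setItem("ui_lang", lang); // persist the user's override
  location.reload();                     // re-render in the new language
}
```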

Pitfall 4: Any data is better than no data, right?

You are debating whether to put a search box at the top or the bottom of a content section. While talking about the issue over lunch, your business development buddy suggests that you try making the top search box “Search across the Web” and the bottom search box “Search this article” to compare the results between the two. You can’t quite put your finger on why this idea seems fishy, though you can see why it would be more efficient than getting your rusty A/B testing system up and running again. Sensing your skepticism, your teammate adds, “I know it’s not perfect, but we’ll learn something about search boxes, right? I don’t see a reason not to put it in the next release if it’s easy?”

The human mind’s ability to fabricate stories to fill in the gaps in one’s knowledge is absolutely astounding. Given two or three data points, our minds can construct an alternate reality in which all of those data points make flawless sense. Whether it’s an A/B test, a usability study, or a survey, if your exploration provides limited or skewed results, you’ll most likely end up in a meeting room discussing everyone’s different interpretations of the data. This meeting won’t be productive and you’ll either agree with the most persuasive viewpoint or you’ll realize that you need a follow-up study to reconcile the potential interpretations of your study.

  • Push for requirements. When talking with your colleagues, pin down what you are trying to learn. What is the success metric you’re looking for? What will the numbers actually tell you? What are the different scenarios? This will help you determine the study you should run while also anticipating future interpretations of the data before running the study (e.g., if the top search bar performs better, did you learn that the top placement is better or just that users look for site search in the upper-left area of a page?).
  • Recognize when a proposed solution is actually a problem statement. Sometimes someone will propose an idea that doesn’t seem to make sense. While your initial reaction may be to get defensive or to point out the flaws in the proposed A/B study, you should consider that your buddy is responding to something outside your view and that you don’t have all of the data. In this scenario, perhaps your teammate is proposing the search box study because he has a meeting early next week and needs to work on a quicker timeline. From his perspective, he’s being polite by leading with a suggestion, without realizing that you don’t have the context for it. Once you press him on what problem the study will solve, you can also help him think through alternative ways of getting the data he needs faster.
  • Avoid using UX to resolve debates. UX might seem like a fantastic way to avoid personal confrontation (especially with managers and execs!). After all, it’s far easier to debate UX results than personal viewpoints. However, data is rarely as definitive as we’d like. Conducting needless studies risks slowing down your execution and leaving deeper philosophical issues unresolved, only for them to resurface later. Sometimes we agree to a study because we aren’t thinking fast enough to weigh the pros and cons of the approach, and it seems easier to simply agree. However, you do have the option of occasionally saying, “You’ve raised some really good points. I’d like to spend a few hours researching this issue more before we commit to this study. Can we talk in a few hours?” And when you do ask for this time, be absolutely certain to proactively follow up with alternative proposals or questions, not just reasons why you think the study won’t work. You should approach your next conversation with, “I think we can apply previous research to this problem,” or “Thinking about this more, I realized I didn’t understand why it was strategically important to focus on this branding element. Can you walk me through your thinking?” or “After today’s conversation, I realized that we were both trying to decrease churn but in different ways. If we do this study, I think we’re going to overlook the more serious issue, which is…”

Pitfall 5: By human nature, you trust the numbers going in the right direction and distrust the numbers going in the wrong direction.

Hours after a release, you hear the PM shout, “Look! Our error rates just decreased from .5% to .0001%. Way to go engineering team! Huh, but our registration numbers are down. Are we sure we’re logging that right?”

Even with well-maintained scripts, the most talented stats team, and the best intentions, your usage statistics will never be 100% accurate. Because double-checking every number is unrealistic, you naturally tend to optimize along two paths: 1) distrust the numbers that are going in the wrong direction and, more dangerously, 2) trust the numbers that are heading in the right direction. To make matters worse, data logging is amazingly error-prone. If you spot a significant change in a newly introduced user activity metric, nine times out of ten it’s due to a bug rather than a meaningful behavior. As a result, five minutes of logging can result in five days of data analyzing, fixing, and verifying.

  • Hold off on the champagne. Everyone wants to be the first to relay good news so it’s hard to resist saying, “We’re still verifying things and it’s really early, but I think registration numbers went up ten-fold in the last release!” Train yourself to be skeptical and to sanity-check the good news and the bad news.
  • QA your logging numbers. Data logging typically gets inserted when the code is about to be frozen, and since it shouldn’t interfere with the user experience, it tends not to be tested. Write test cases for your important data logging numbers and include them in the QA process (see the sketch after this list).
  • Establish a crisp data vocabulary. Engagement, activity, and session can mean entirely different things between teams. Make sure that your data gatekeeper has made it clear how numbers are calculated on your dashboards to help avoid false alarms or overlooked issues.
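
Here is one hedged sketch of such a test, written Jest-style; registerUser, logEvent, and the module paths are hypothetical stand-ins for your own code. The idea is to mock the logging transport and assert that the action emits exactly the event, with exactly the fields, that your dashboards parse.

```typescript
// Jest-style sketch: verify the action under test emits the log event
// the dashboards depend on. All names here are hypothetical.

import { registerUser } from "./registration"; // hypothetical module
import * as logger from "./logger";            // hypothetical module

test("registration emits one 'user_registered' event", async () => {
  const spy = jest.spyOn(logger, "logEvent").mockImplementation(() => {});

  await registerUser({ email: "test@example.com" });

  // Exactly one event, with the name and fields the dashboard parses.
  expect(spy).toHaveBeenCalledTimes(1);
  expect(spy).toHaveBeenCalledWith(
    "user_registered",
    expect.objectContaining({ email: "test@example.com" })
  );
});
```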

One of the main tenets of user research is to constantly test the assumptions that we develop from working on a product on a daily basis. It takes time to develop the skill of knowing how to apply our UX techniques, when our professional expertise should trump the user’s voice, or when to distrust user data. As a researcher, you are trained to keep an open mind and to keep asking questions until you understand the user’s entire mental picture. However, it’s that same open-mindedness and willingness to understand the user’s perspective that makes it easy to assume that because a user’s perspective makes sense, it should also drive changes to our product design. Or, because we are so comfortable with a particular type of UX research, we tend to over-apply it to our team’s questions.

While by no means a complete list, I hope these five pitfalls from my personal experience will be relevant to your professional lives and perhaps provide some food for thought as we all strive to become better researchers and designers.


Elaine Wherry
Elaine Wherry is Co-founder and VP of Products at Meebo and oversees Meebo's Web, User Experience, and Product Management teams. She takes a special interest in finding passionate folks who want to build amazing products that bring people closer together across the Web: https://www.meebo.com/jobs/openings/. You can find her personal blog at https://www.ewherry.com/.
