UX research is strangely divided into “generative” and “evaluative” studies: the former ideally conducted early in product development, the latter further along in the process. This has never made sense to me. In what other kind of research would we endeavor to test concepts without learning new things about the problem we were studying?
I don’t think this distinction serves us well as researchers or designers. Every single study we do should generate new insights about the people we study and allow us to refine our understanding of the problems our products are trying to solve. Just as products are constantly evolving, so must our understanding of the people we build for and their needs and goals. I only want evaluative research if it is generative. Making evaluative research generative does require effort, thinking, and preparation, as well as spending time talking with your audience of focus during sessions. It does not often require extra time commitments in the form of weeks or days, and when it does those costs are often recouped in the form of less frequent testing and decisions made with more robust data and higher levels of confidence.
In addition, depending on the UX maturity of the organization, it is rare for UX research to be involved early enough in the process to conduct classically generative studies at the beginning of product development, but that does not alleviate the need to intimately understand who we are building for and the problems we are solving, and to guide our iterations with growing insights. Depending on when the insights come into the development process, they may be too late to shift the plans for the next release, but that doesn’t mean the effort is wasted. Software is never done, and people change much more slowly than technology, meaning that generative insights are always an investment in the future of our organizations, allowing us to become progressively better at iterating and building over time. Collapsing the distinction between evaluative and generative studies also offers a way to meet the organization’s insatiable desire for evaluative research while at the same time gathering the foundational and generative insights we need to make robust decisions and build coherent product strategies.
Here are some ways to expand requests for evaluative research into research with generative insights.
When we (the researchers) gather questions from our design, product, and engineering partners, we are likely to hear questions about our products such as “do people like it?”, “do people understand it?”, and “what do people think of our new interface?” We regularly get asked these types of questions on my team. To make these evaluative questions generative, I rewrite them to open them up: I make them questions about the person rather than the product, because a design is only as useful as how well it aligns with the needs of the person using it.
For example, “do people understand it, do they like it?” becomes: How are people accomplishing this task today? Why are they doing it? What is important about it to them? Who are they doing it for? Does this product, feature, or service enable them to accomplish their goals better, quicker, or easier? What parts of this task do people enjoy? Why? Is what we are providing more or less enjoyable than their current experience? How?
I like to capture these questions in my research brief, often alongside the product-focused questions from the team so that I am making sure my partners and stakeholders feel heard and confident their research needs will be addressed.
Every single qualitative research session is a gift of someone’s time and attention. It is our job as researchers to treat that gift with respect and make the most of it. To me, every single time I sit down with a user for a research session is first of all an opportunity to learn about the person in front of me, and second, an opportunity to learn more about how they interact with our concepts. This may sound like some really touchy-feely shit that has no place in tech, but valuing people’s time in this way leads to robust business-leading insights and also makes your participant feel safe talking to you and giving you negative or critical feedback, which is really important to getting trustworthy insights.
- I always begin by asking “casual” questions to build rapport. Since I study people at work I like to ask them about what their job is, how long they’ve been there, and what drew them to it. This helps me start to build a model in my head of who they are and what they value about their work.
- I may also ask them about their favorite and least favorite parts of their job, to walk me through a typical day, and about all the strategies they use to accomplish their goals, whether that is our product, a competitor’s product, or other strategies such as pen and paper, phone calls, etc. This allows me to understand the context in which they encounter our product or service, opens up opportunities to learn about our competitors in the space and probe on why they use certain tools, and sometimes even identifies opportunities for green-field solutions to problems for which no solutions currently exist.
- Once I have learned about their key tasks and needs, I try to orient the concept test around them. For example: “you said you need to share your work with your coworker for review, could you use this new concept to do that for me?” I’ll often encourage people to think out loud during this process.
- I love to ask stupid, stupid questions when people react to a concept. For example:
- Participant: “This is great.”
- Me: “Oh yeah? Tell me more about that.”
- Participant: “It’s great because it lets me go faster.”
- Me: “What’s helpful about going fast?”
- Participant: “Well, we’re always working two steps behind our developers, so everything in my job is always needed yesterday.”
- Me: “How do you manage that today? How do you feel about that?”
- Asking these kinds of very simple questions often leads to unexpected insights that help you build strategies around the pervasive pressures facing your audience of focus.
The first thing I do to make sure my evaluative research is also generative is to not just describe people’s reactions to a concept or prototype, but to use observation and all the “touchy feely” conversations I had with my participants to identify the reasons WHY people behave the way they do. It may sound obvious, but people with the same role or set of needs often have very different motivations and triggers for delight. For example, rather than saying “3 out of 5 participants could not successfully complete the task,” I want to be able to identify what the people who succeeded or failed had in common that shaped their reactions to the concept.
I am a hoarder. I keep every recording, every note, and every draft of my research reports. I am also painfully militant about this for my team. Deleting old recordings (which I do regularly for data handling reasons) is almost physically painful. Remember, each hour of qualitative research is a precious opportunity to capture the complexity of the problems we are trying to solve for people and the needs they are trying to meet with our tools. Even if the team is only interested in the immediate research findings about success or failure of a concept, I can leverage this information as needed to build personas or archetypes to inform our thinking and share it with designers working on future versions of related concepts.
The next level of value I get from research sessions is meta-analysis. In my experience, there is almost nothing as powerful as a meta-analysis that draws from multiple qualitative research studies. They allow researchers to pull out robust insights and themes for ongoing and related projects. When teams are in disagreement and have divergent research, synthesizing it together can reveal common ground and create a shared foundation for alignment. Years of research with hundreds of participants hold up to scrutiny much better than one study that represents a snapshot in time of 5–10 people.
On the flip side, analysis of multiple studies is a great way to identify gaps in our understanding and future research questions. Meta-analyses can reveal research flaws and limitations. For example: “We thought we had spoken to 100 people, but 5 research teams not talking to each other all spoke to the same 20 people from our Beta program. Oops.” We may find we cannot draw robust conclusions from past research because the documentation did not include information about the participants, their problems, and their context. Learning which questions we cannot answer is nearly as important to informing our thinking as answering the ones we can.
A note on quantitative evaluative research:
While quantitative evaluative methods such as surveys, rating scales, and A/B tests are often thought of as purely evaluative and deployed as quick experimental testing without the cost of qualitative research, they come with costs of their own, often paid in the currencies of attention, enthusiasm, candor, or privacy. In my experience, they too can lead to generative insights and questions, but only if (at a minimum) the findings are shared with design and research teams, which is too rarely the case.
A note on why I don’t see a lot of value in rapid qualitative testing methods:
- They test such limited pieces of work that the learning is thin and the takeaways disposable, which leads to having to repeat them with every iteration and falling into a cycle of never having time to do generative work.
- Even if someone fits a profile of the type of person we want to talk to, they may not have the needs we are trying to meet with our product. We need to make sure our participants actually care about the problems our product or service is solving or we run the risk of getting misleading or inaccurate information.
It may seem like an outrageous or costly proposition that EVERY study should be generative. Wouldn’t that create a chaotic scenario of constant pivots with every new insight? No.
Rather, treating every study as generative is a great way to build a robust roadmap based on high-quality insights, honed over time and refined with repetition. Qualitative research is already a hard sell in tech companies enamored of numbers, and small studies with 5 people can often be dismissed as directional rather than declarative. However, when insights are gleaned, expanded, and refined with hundreds of participants over the course of many studies, they are much harder to devalue and can give us a higher level of confidence in our findings and interpretations.
Shifting our lens from testing our products to centering the needs of people allows us to never stop learning and generating new ideas, insights, and innovation. All research should be generative and ready to inform our next release, and the one we have not started planning yet.