As many product managers are aware, two principles of product development must be understood to ensure a product’s success: data points are stronger than opinions, and scalable experiments are the only way to ensure the continuous collection of usable user data. Before experiments can be concretely defined and scaled, however, product managers must determine what types of experiments to run to validate initial demand.
There is no shortage of methods for gathering user data that can generate all sorts of qualitative and quantitative insight throughout the product development lifecycle. However, many of the available techniques were designed by market research firms and may not be optimal for product managers.
This is especially true in the context of demand validation, which my colleague Steve Cohn of Validately defines as the bridge between the “problem space” and the “solution space.” Simply put, it is the process of going from 0 to 1—beginning with nothing and then identifying and crafting a solution for which there is at least a single user. It is not about scale or about market size, nor is it primarily about gauging user experience or profitability for a completed product, which is the purview of most market research firms.
As such, demand validation experiments have a unique set of requirements and best practices. For example, usability testing would be a silly first step if you have yet to validate the demand for a product concept. You would simply waste time optimizing a user experience no user wants.
So what criteria or requirements must you first consider when building demand validation experiments? Before crafting the experiments themselves, take time to evaluate the impact of the three overarching factors listed below.
Every product manager in the world is familiar with the notion that users don’t know what they want in a product. I’ve found that this is not only true, but that users aren’t particularly good at giving feedback either. As Cohn often says, “There is a large divide between what people say and what they actually do.”
To supplement tenuous and sometimes unreliable feedback, product managers will often rely on past behavior as a predictor of future behavior. But there is another way to gauge product value and need, and it’s rooted in real-world economics: the financial sacrifice (cost) a user is willing to make for a product.
In the real world, every product is pitted against alternatives that can be purchased for the same price. If users choose to pay for your product before or instead of others, then it serves as a confirmation of product value. This financial sacrifice or cost can be coupled with voluntary feedback to help validate demand.
But how do you determine sacrifice without an existing product? One of the most obvious ways is to ask users to commit to paying for it right now, before it exists.
Alternately, Cohn says that sacrifice can take the form of time or reputation. If a user is willing to take the time to give you continuous feedback, it is a sacrifice they are willing to make because they demand your product. That demand is also evident if they vouch for it or recommend it to colleagues.
All of your validation experiments must involve some element of user sacrifice if they are to serve as valuable and reliable feedback.
I’ve written extensively about the benefits of multivariate split testing. For a variety of reasons, it can be difficult to ascertain why a product concept resonates or falls flat. Split testing helps you zero in on a particular design feature or function of your product concept and measure how value perceptions shift as those features and functions change.
For example, you could split test different registration flows, first time user experiences, and search processes to validate or invalidate particular manifestations of a product concept.
With little additional effort, you can test multiple opportunities simultaneously to generate substantially more insight. This is the case for nearly every type of demand validation experiment you run, as described below.
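To make the split-testing idea concrete, here is a minimal sketch of deterministic variant assignment, so the same user always sees the same version of a concept and results can be aggregated cleanly. The variant names and hashing scheme are illustrative assumptions, not a prescribed implementation.

```python
import hashlib

# Hypothetical concept variants under test (e.g., different registration flows).
VARIANTS = ["flow_a", "flow_b", "flow_c"]

def assign_variant(user_id: str) -> str:
    """Hash the user ID and map it onto a variant bucket.

    Hash-based assignment is stable: the same ID always lands in the
    same bucket, so a user never sees two competing versions.
    """
    digest = hashlib.sha256(user_id.encode("utf-8")).hexdigest()
    bucket = int(digest, 16) % len(VARIANTS)
    return VARIANTS[bucket]

# The same user is always routed to the same variant.
print(assign_variant("user-42") == assign_variant("user-42"))  # True
```

In practice, any stable assignment scheme works; the point is that each user’s feedback is attributable to exactly one manifestation of the concept.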
There is little point in championing a new experiment-based demand validation process and transitioning a business unit to it if you can’t execute that process repeatedly and seamlessly.
Customer development experts like Mike Fishbein have written about the challenges and solutions for going from one experiment to continuous experimentation.
According to Fishbein, one of the most important aspects of scaling is coming up with a repeatable process for evaluating test results. At Alpha UX, we combine a set of metrics and variables into what we call a “perception of value.” We use the same criteria from test to test to evaluate results and validate or invalidate hypotheses. Coming up with your own evaluation criteria is necessary for successful scaling.
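As a sketch of what a repeatable evaluation rubric might look like, the snippet below combines a few normalized metrics into a single score and applies the same threshold from test to test. The metric names, weights, and threshold are illustrative assumptions, not Alpha UX’s actual “perception of value” criteria.

```python
# Hypothetical rubric: each metric is normalized to [0, 1] before scoring.
WEIGHTS = {
    "task_completion_rate": 0.4,       # did users reach the objective?
    "stated_willingness_to_pay": 0.3,  # share committing money up front
    "referral_intent": 0.3,            # share who would vouch to a colleague
}
VALIDATION_THRESHOLD = 0.6  # the same bar applied from test to test

def perception_of_value(metrics: dict) -> float:
    """Combine weighted metrics into one comparable score."""
    return sum(WEIGHTS[name] * metrics[name] for name in WEIGHTS)

def is_validated(metrics: dict) -> bool:
    return perception_of_value(metrics) >= VALIDATION_THRESHOLD

result = {"task_completion_rate": 0.8,
          "stated_willingness_to_pay": 0.5,
          "referral_intent": 0.6}
print(perception_of_value(result))  # 0.4*0.8 + 0.3*0.5 + 0.3*0.6 = 0.65
```

The specific numbers matter less than the consistency: a fixed rubric is what lets results from one experiment be compared against the next.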
Demand Validation Experiments
Below, I have outlined the playbook for developing six demand validation experiments in three separate phases: ideation, validation, and optimization. It’s important to note that there is no “best” set of experiments, and that all experiments need to be crafted and optimized to meet the needs of individual business units.
Experimentation should never be a replacement for listening to your customers and monitoring your market. Building an organization that focuses on the user requires a lot more than a few user experience tests. To that end, ideation demands a strong grasp on target markets and users. When this is in place, you can easily refine product conceptualization and development based on concrete user need. Below are two experiments you can use to help determine that need.
Experiment 1: In-person interviews
Bring in target users for whom you believe the product need is most acute and conduct in-person interviews. I stress the importance of in-person interviews over remote or automated interviews because you simply can’t prepare for everything a user may say. Often, you’ll want to ask a follow-up question, and in-person interviews give you the opportunity to get complete feedback. Just 10 half-hour interviews can really help clarify your assumptions and lead you to the next experiment.
Also, be careful not to ask leading questions or bias your interviews in any way. You want user feedback to be genuine and authentic in order to accurately direct product development.
Experiment 2: Automated objective-based interviews
To validate hypotheses generated after in-person interviews, we often run automated experiments that ask users to reach different objectives. For example, if we’re evaluating a presumed need for an online food ordering website, we’ll run an online test and ask 50 users to order food online. Tools like Validately can record the user’s screen and voice while they navigate from objective to objective.
The results from automated objective-based interviews help us to validate, invalidate, or revise assumptions and hypotheses about pain points in product development. If we’ve reached our predetermined criteria, we can move ahead to other experiments.
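The evaluation step above can be sketched as a simple tally of session outcomes against a pass bar decided before the test runs. The session records and the 70% bar are illustrative assumptions.

```python
# Hypothetical session records from an automated objective-based test,
# e.g., "order food online." In practice these would come from a tool
# that records screens and task outcomes.
sessions = [
    {"user": "u1", "completed_objective": True,  "minutes": 4.2},
    {"user": "u2", "completed_objective": False, "minutes": 9.0},
    {"user": "u3", "completed_objective": True,  "minutes": 3.1},
    {"user": "u4", "completed_objective": True,  "minutes": 5.5},
]

REQUIRED_COMPLETION_RATE = 0.7  # the predetermined criterion

def completion_rate(sessions):
    done = sum(1 for s in sessions if s["completed_objective"])
    return done / len(sessions)

rate = completion_rate(sessions)
print(f"completion rate: {rate:.0%}")  # 75%
print("move to the next experiment" if rate >= REQUIRED_COMPLETION_RATE
      else "revise the hypothesis")
```

Setting the criterion before running the test is what keeps the decision honest; otherwise it is too easy to rationalize a weak result after the fact.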
As mentioned above, validation experiments differ from other types of UX tests. Generally, you aren’t making a decision between two versions or directions, but are deciding whether to move forward with additional experiments or stop altogether. The goal is to find any indication of demand.
Below are a few of the minimum viable experiments we run to evaluate potential solutions.
Experiment 3: Explainer
An explainer is a quick and easy way to convey a value proposition to prospective users. An explainer video, for example, illustrates what your product does (or would do) and why people should use it. It can be as simple as a 90-second animation with a call-to-action at the end. Dropbox is a classic example of a company that built up interest exclusively around an explainer video.
An explainer is the simplest way to get feedback on a product concept, though the feedback you receive will be limited. It can be a good first step, but to generate more robust feedback, you’ll likely need one of the following two experiments.
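The feedback an explainer produces is typically a handful of conversion numbers from its call-to-action. Below is a minimal sketch of how those numbers might be tallied; the event names and counts are illustrative assumptions, standing in for data from an analytics tool.

```python
# Hypothetical event log from an explainer landing page.
events = (["viewed_explainer"] * 500
          + ["clicked_cta"] * 60
          + ["left_email"] * 25)

views = events.count("viewed_explainer")
clicks = events.count("clicked_cta")
signups = events.count("left_email")

# Leaving an email address is a small sacrifice of time and reputation,
# which is what makes it a meaningful demand signal rather than idle praise.
print(f"CTA click-through: {clicks / views:.1%}")   # 12.0%
print(f"email signup rate: {signups / views:.1%}")  # 5.0%
```

Even a crude funnel like this gives you a number to compare across explainer variants, which is more than open-ended feedback provides.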
Experiment 4: Interactive prototypes
As mentioned above, an explainer video often doesn’t do a product concept justice, nor does it offer much of an opportunity for specific feedback. In such cases, you may want to demonstrate the interactivity and flow of a product experience. But it can be cost-prohibitive to build such a product, no matter how simple, which is why I advocate for experiments in the first place and, in this case, the creation of a product prototype.
There are numerous tools with which designers or product managers can build impressive, interactive prototypes to convey a value proposition and generate actionable insight. Consider some of these in later stages of demand validation.
Experiment 5: Service-as-a-Software or pre-selling
The optimal, although not always practical, demand validation experiment is to either pre-sell a product before it exists, or operate what I call a Service-as-a-Software (also known as a Wizard of Oz or Concierge MVP). This is when you build the presence or shell of a digital product, but are actually operating the software manually. Zappos is an example of a company that launched this way.
A Service-as-a-Software experiment requires little engineering investment while generating a ton of feedback. User responses to an existing, usable solution are going to be more accurate than feedback on a potential solution, though a number of limitations may prevent you from running such an experiment. The point is that feedback becomes more reliable as the product becomes more real.
While the demand validation process can be used for feature prioritization and general product roadmaps in Cohn’s “0 to 1” scenario, going from 1 to 2 requires another level of statistical confidence. As such, I’ll conclude this playbook with a type of experiment we often run before a product is officially launched, but after demand has been qualified.
Experiment 6: Usability tests
Demand validation is not about filling out the specs of a product to pass onto engineers. It is about substantiating hypotheses about customers and solutions in broad strokes from a product and feature-level perspective.
Usability testing, on the other hand, is a standardized and technical solution with established best practices that experience designers can utilize to test individual design components. If demand validation addresses the “what” and “why,” usability testing examines the “how.”
With the “how” in mind, we run a type of usability test during the demand validation phase to experiment with drastically different manifestations of a product concept. For example, we might want to know whether a product would be more valuable as a phone app or tablet app. When we’ve already established demand for a feature or function, it can be important to flesh out interactive mockups of a product so that users can gain an accurate experience and evaluate the practical differences—in this example, between phone or tablet versions of the same app. Generally, we make an effort to examine a perception of value while also tracking the actual behavior and flow of the user through the variant features/functions.
In this article, I’ve included just a few of the many types of experiments you can run in the demand validation phase—from ideation to prototype development. Experimentation is a process that must be tested and customized to meet your business goals. Use this playbook as a potential first step in demand validation and revise it accordingly.
To help revise and improve this process, please share your own positive or negative experiences with this playbook in the comments below.