I’ve been designing, in both print and digital media, for over twenty years.
Lately I’ve been feeling that for much of that time I have done a more mediocre job than most clients and users give me credit for, and have, in many ways, simply lucked out when my designs turned out well.
For about six years now I have worked as a UX designer for agencies, and as a consultant. Working and talking with people across many different organizations, I have heard a lot of them express similar feelings. We don’t really know how to get to good design, and tend to rely on hiring more good people and hoping for the best.
It seems that even strict processes don’t produce reliably good results, no matter how long they take. Why is that?
Designing for Perfection
What I’ll call the traditional way to design is to address every possible aspect of the system and to seek perfection in everything. I have watched this method grow out of print media—where everything was at fixed scale and of reasonable depth—and classic human factors—which uses very intense analysis to build controls for mechanical systems like those found in aircraft, where failure is more often than not a real catastrophe.
If you think you are cutting edge because you do digital design, you may need to think again. Anyone designing in Photoshop or Fireworks embraces this same print-era pursuit of perfection when they draw a precise comp for every screen of an app or site. In mobile design I see anything from two to seven comps drawn for each page, as the same precision is exercised for each screen size or aspect ratio. Designing for iOS only? You still need iPhone 4S and below, iPhone 5, and iPad, in each orientation—minimum.
Technical design and information architecture also fall into this trap with processes based around use case analysis. Even if your process or organization doesn’t strictly do that, or calls it something other than a use case, the principle is the same: you’re seeking to understand and control every aspect of the system. Lists of use cases are generally hundreds of items long, and each has to be addressed with clearly defined system behaviors.
Iterating for Efficiency
Aside from the sheer size of these deliverables, this traditional approach doesn’t work for a couple of reasons. First, systems are far too complex and diverse (or fragmented, if you prefer). It is mathematically impractical to address every use case, so we fake it and pick a few on the happy path of the most likely cases. When you consider all the combinations of choices a user could truly make on many systems, the variations run into the billions. I did some rough math on one project and determined the documentation and analysis would take longer than the whole course of human history up to that point to complete. (If you take, say, 100 functional modes that can be combined, work without serious breaks, and get approval from your product owner quickly, it would still take over 8,000 years to specify every possible state.)
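To make that arithmetic concrete, here is a back-of-the-envelope sketch. The figures are assumptions for illustration (billions of states, one minute to specify each), not the original project’s exact numbers:

```python
# Rough arithmetic behind the "8,000 years" claim. Assumed figures for
# illustration only; the original project's numbers aren't given here.
states = 4.2e9               # combinatorial variations "in the billions"
minutes_per_state = 1        # optimistic: one minute to specify each state
minutes_per_year = 60 * 24 * 365

print(states * minutes_per_state / minutes_per_year)  # ~7,990 years
```

Even with those generous assumptions, exhaustive specification is off the table before you add a single extra mode.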
Second, pixel-perfect design is impossible in this diverse environment. In just a single month late last year, I did a quick survey and found around 40 newly launched mobile handsets and tablets with 40 different screen sizes and almost 30 different resolutions. It would take years to design for every one of these screen sizes, and by the very next week the world would have moved on, with still more devices launched. Insisting that one platform is best and ignoring the others is not the answer, by the way, and seems to be leading to lots of apps that don’t even work when you rotate the screen.
The current state-of-the-art processes for addressing these concerns involve various iterative methodologies, such as Agile. These are mostly development processes applied (or forced) onto UX design disciplines, and they focus on breaking work up into smaller chunks. In principle, each deliverable is something that can be analyzed or designed, and complexity emerges over time in a controlled manner as iterations add to it. In practice, many of these methodologies become simply incremental processes, meaning the same work as always is now done in phases.
Lean methods build on these with principles of doing the most important work first and allowing products to evolve organically. In practice, these approaches end up disregarding design and, by seeking speed over quality, don’t really move us forward from a UX point of view.
Learning Nothing from Failure
“Fail faster” and similar iteration-centric buzzwords only work well in scientific disciplines. As humans, we are inclined to evaluate the results of past activities, designs, and events as having been more predictable than they actually were before they took place.
This hindsight bias means you have an illusion of understanding what went wrong, and that understanding may not be accurate. This “I knew it all along” effect means people tend to remember past events as having been predictable at the time they happened. Our memory is inaccurate; we retain only the events and information that back our pre-existing worldview.
None of this means failure is bad. Science and the more formal engineering disciplines learn from failure well, and have done so for centuries. But you have to have clear goals, expected outcomes, and some control over variables. Then you have to take the time to measure (without bias) and accurately communicate the results. More formalized post-mortem reviews, like the U.S. military’s after-action review, help tease out these issues in less technical areas like project management and team interaction.
In design and software development, speed kills. Without stopping to make sure we are doing the right thing, we rapidly make the same mistakes not once but over and over again.
I find it easy to integrate these analytical methods into the design process. Even when working alone, and without formal evaluation, it is worth looking at mistakes, changes, and past iterations to see why a failure occurred or a change was required.
As designers and developers we forget to look up to see where we are in the bigger picture. We try to block things out into pixels and comfortable code templates. We need to step back occasionally, just for a minute, and think in broad brushstrokes. How are our detailed decisions impacting the way the whole product works, and the way it will integrate with users’ lives?
Constraints
There are many pressures put on design and implementation teams. A lot of them are specific to industries, organizations, or projects. Often, when I come into an organization to consult, I find that many elements that are assumed to be constraints are actually variables, or at least arguable. I’d stipulate that the real constraints are:
- Arbitrary Complexity: Most systems are so complex that they cannot be modeled with pencil and paper (or whiteboard, or flowchart, or wireframe) in sufficient detail to execute design.
- Systems All the Way Down: The complex application or website is embedded into a much more complex OS, in a differently-complex device, running on a vastly complex network, carried by a person with their own needs and behaviors, living in a world of other people and really complex systems. You cannot predict what might happen to these variables and how they will affect your little part of the world.
- Networks of Delay: Everything is connected now, but traditional network models fail us. Instead, we have built a giant, distributed supercomputer that we all carry windows into in our pockets. Our devices, apps, and websites just delay and interfere with access to this information, to varying degrees. Anything that reduces delay is good. Anything that gets in the way or exposes edges of the network is bad.
- Systems are Fault-Intolerant: Right now, technical systems (software, data) fundamentally expect perfect information and absolute precision in computation. They are intolerant of imprecision, missing data, and failures to connect unless that tolerant behavior is itself precisely designed (see the sketch after this list).
- Users are Fault-Intolerant: While people are great at estimating, fudging, and getting by, they don’t like to use tools that break or cause them to work harder. Very simple gaps in “look and feel” can induce familiarity discomfort similar to the uncanny valley of anthropomorphized representations. At the very least, you will encounter reduced satisfaction and trust. Often, that means users will give up and use a competitor’s product or service.
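What “precisely designing” for imperfect input can look like is easier to show than to describe. Here is a minimal sketch in Python; the function, fields, and defaults are hypothetical, invented for illustration:

```python
import json

DEFAULTS = {"name": "Guest", "locale": "en", "theme": "light"}

def load_profile(raw):
    """Parse a user profile, tolerating missing or malformed data."""
    try:
        data = json.loads(raw)
    except (TypeError, ValueError):
        return dict(DEFAULTS)          # fail gracefully with usable defaults
    if not isinstance(data, dict):
        return dict(DEFAULTS)          # wrong shape? same graceful fallback
    # Keep whatever valid fields arrived; fill the gaps with defaults.
    return {**DEFAULTS, **{k: v for k, v in data.items() if k in DEFAULTS}}

print(load_profile('{"name": "Ada"}'))  # {'name': 'Ada', 'locale': 'en', ...}
print(load_profile(None))               # falls back entirely to defaults
```

The point is not this particular code, but that tolerance of bad input has to be designed in; by default, the system simply breaks.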
Bound to Fail
Relatively few people have been working on other methods that try to address some of these issues by being more flexible, working the way the products work, and working with the environments we design for. These methods and principles are less unified and don’t yet share a single name. All of them are based on designs that are less scale-dependent (so they work on multiple screen sizes), are more modular to encourage re-use, and are annotated heavily to ensure better coordination with implementation and quality assurance.
In the past decade or two, resilience engineering has become an important technical practice area. Resilience in this sense is the ability of a system to absorb disruptions without tipping over into a new kind of order. A building, when exposed to too much lateral ground movement, settles into a state of “rubble on the ground” unless you design it to resist that disruption adequately. Services like Google, Facebook, Etsy, Flickr, Yahoo!, and Amazon use resilience engineering teams and principles not to make sure their data centers stay running, but to ensure that when failures occur, end users don’t notice.
Our systems are embedded in other technical systems, which are in turn embedded in complex human systems: the user carrying the device around, who may be subject to rain, traffic delays, and screaming babies. We can’t predict every failure point, but we can plan to fail gracefully instead.
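As a toy illustration of that principle, consider a fetch that quietly falls back to cached data when the live call fails, so the user still sees something useful. The names here are hypothetical, not any real service’s API:

```python
import time

_cache = {}  # key -> (timestamp, value)

def fetch_with_fallback(key, fetch):
    """Prefer fresh data; fall back to stale cached data if the fetch fails."""
    try:
        value = fetch(key)
        _cache[key] = (time.time(), value)  # refresh the cache on success
        return value
    except Exception:
        if key in _cache:
            _, stale = _cache[key]
            return stale      # degrade gracefully: stale data beats nothing
        raise                 # nothing cached; surface the failure

# Usage (hypothetical): fetch_with_fallback("weather", get_weather_from_api)
```

The design choice is that a slightly stale answer is almost always better, from the user’s point of view, than an error screen.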
Embracing Errors and Complexity
Building on flexible UX design methods, mixed with these principles of resilience engineering, I propose another approach to designing for these constraints of complexity, error, delay, and intolerance. Instead of fighting these constraints, we can design better systems by embracing failure and uncertainty.
In 1952, computing pioneer John von Neumann called computational errors “an essential part of the process” of computing. This was almost forgotten for decades with the advent of the high-reliability computers we work on today. But growing volumes of data and the need to conserve power are driving solutions that leverage probabilistic design, ignoring unused data or letting small errors occur in computation.
Surprisingly, this is not new even to visual and graphic design. Printing machinery has inherent inaccuracies and behaviors that make certain apparently feasible designs insufficiently reliable for mass production. Print design has to account for ink being placed in slightly the wrong place, for overlaps on adjacent layers, and for slight errors in binding, folding, or trimming.
In print (and package design) these are all considered tolerances, or normal variations, not errors. Accounting for overlaps and inaccuracies in ink placement is a whole (very specialized) practice area called trapping. I have books on my shelf dedicated just to trapping.
For UX practice, accounting for tolerances and normal variation, instead of lamenting fragmentation, complexity, and error, means letting go of perfect solutions in order to create better, more robust systems. We need to embrace the messiness and complexity of systems, and of real life.
Applying these approaches to interaction design, with all its additional complexities, really doesn’t call for anything new. We already do many of these things, but we need to switch from treating them as discrete tasks to seeing them as principles applied regularly and repeatedly throughout the design of any project.
Here are some things we can do to design for imperfection:
- Assume the Unknown: No matter how many questions you ask or how much research you do, there will be behaviors you cannot predict or understand. This will keep happening. A minor change or a shift in trends will cause use of your system to change.
- Think Systematically: All the data we gather on personas and the concepts we use to develop storyboards are not inputs to the design process; they are the design process. Systems thinking is about setting aside constraints and considering how the people, paperwork, email, data storage, and every other part make up the whole. So, in every phase of the project, explicitly discuss the user, your expectations of their behaviors and needs, and how those are met in every way.
- Use and Make Patterns and Guidelines: Art and science both evolve by building on what has come before. Use existing patterns by understanding how they apply to your work. Create patterns for your product rather than screens. Design and document each widget to be as generalized as possible so you re-use and re-apply instead of designing each interaction from scratch.
- Analyze and Focus: Admit you do not know a lot about the pressures, behaviors, and inputs of the user and customer. But don’t just throw your hands in the air and design only for the ones you do know. List out likely needs, uses, and behaviors, classified so that they cover not specific actions (we’re not modeling use cases) but classes of behavior the design can respond to. Rank these by their chance of happening and focus on the high-probability classes, as the sketch below illustrates.
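Ranking by likelihood can be as lightweight as a list and a sort. A minimal sketch, with behavior classes and probabilities invented purely for illustration:

```python
# Hypothetical behavior classes with estimated probabilities (illustrative).
behavior_classes = [
    ("glance at status while walking", 0.55),
    ("complete a full task at a desk", 0.30),
    ("recover after a dropped connection", 0.10),
    ("first-run setup", 0.05),
]

# Spend design effort on the most probable classes first.
for name, p in sorted(behavior_classes, key=lambda c: c[1], reverse=True):
    print(f"{p:5.0%}  {name}")
```

The value is not in the code but in forcing the team to write the classes down and argue about the rankings.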
Conclusion
These are not steps in a process but constant activities carried out regularly throughout design and—if you can get everyone to play along—execution. I’ll take a closer look at these activities in action in a future article. For now, keep in mind that this is very much not a process. Design tools do not really matter, as long as you are flexible enough to consider whatever tool is needed, and to solve problems as they arise, not as they are constrained by your tools.
If you’ve tried any of these approaches or have any thoughts on what’s been discussed here, please comment below.
This article was most recently inspired by Gary Anthes’ article “Inexact Design: Beyond Fault Tolerance,” on the probabilistic design of ICs, in the April 2013 edition of Communications of the ACM.