UX Magazine

Defining and Informing the Complex Field of User Experience (UX)
Article No. 1062 July 29, 2013

An Introduction to Designing for Imperfection

I’ve been designing, both in print and digital, for over twenty years.

Lately I've been feeling that for much of that time I have done a more mediocre job than most clients and users give me credit for, and that, in many ways, I have lucked out when my designs turned out well.

For about six years now I have worked as a UX designer for agencies, and as a consultant. Working and talking with many different people, for many different organizations, I have heard a lot of people express similar feelings. We don’t really know how to get to good design, and tend to rely on hiring more good people and hoping for the best.

It seems like many strict processes don’t turn out reliably good results, no matter how long they take. Why is that?

Designing for Perfection

What I’ll call the traditional way to design is to address every possible aspect of the system and to seek perfection in everything. I have watched this method grow out of print media—where everything was at fixed scale and of reasonable depth—and classic human factors—which uses very intense analysis to build controls for mechanical systems like those found in aircraft, where failure is more often than not a real catastrophe.

If you think you are cutting edge because you do digital design, you may need to think again. Anyone designing in Photoshop or Fireworks embraces this same print-era pursuit of perfection when they draw a precise comp for every screen of an app or site. In mobile design I see anything from two to seven comps drawn for each page, as the same precision is exercised for each screen size or aspect ratio. Designing for iOS only? You still need iPhone 4S and below, iPhone 5, and iPad, in each orientation—minimum.

Technical design and information architecture also fall into this trap with processes based around use case analysis. Even if your process or organization doesn’t strictly do that, or calls it something other than a use case, the principle is the same: you’re seeking to understand and control every aspect of the system. Lists of use cases are generally hundreds of items long, and each has to be addressed with clearly defined system behaviors.

Iterating for Efficiency

Aside from the sheer size of these deliverables, this traditional approach doesn’t work for a couple of reasons. First, systems are far too complex and diverse (or fragmented, if you wish). It is mathematically impractical to address every use case, so we fake it and pick a few along the happy path of the most likely cases. When you consider all the combinations of choices a user could truly make in many systems, the variations run into the billions. I did some rough math on one project and determined that the documentation and analysis would take longer to complete than the course of human history up to that time. (If you take, say, 100 functional modes that can be combined, work without serious breaks, and get approval from your product owner quickly, it would still take over 8,000 years to specify every possible state.)
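
To see how quickly this arithmetic runs away from you, here is a minimal sketch in Python. The numbers are hypothetical, chosen only to illustrate the scale, not figures from that project:

    # Hypothetical illustration of the combinatorial explosion in use case analysis.
    # n independent on/off feature modes yield 2**n distinct system states.
    n_modes = 30                 # assumed number of independent binary modes
    states = 2 ** n_modes        # every combination is a distinct state to specify
    minutes_per_state = 4        # assumed time to document and approve one state

    total_minutes = states * minutes_per_state
    years = total_minutes / (60 * 24 * 365)
    print(f"{states:,} states -> about {years:,.0f} years of nonstop work")
    # Output: 1,073,741,824 states -> about 8,172 years of nonstop work

Even 30 binary modes, far fewer than the 100 above, put exhaustive specification into the millennia.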

"Fail faster" and similar iterative-centric buzzwords only work well in scientific disciplines

Second, pixel-perfect design is likewise impossible in this diverse environment. In just a single month late last year, I did a quick survey and found around 40 newly launched mobile handsets and tablets with 40 different screen sizes and almost 30 different resolutions. It would take years to design for every one of these screen sizes, and by the very next week the world would have moved on, with still more devices launched. Insisting that one platform is best and ignoring the others is not the answer, by the way, and seems to be leading to lots of apps that don’t even work when you rotate the screen.

The current state-of-the-art processes for addressing these concerns involve various iterative methodologies, such as Agile. These are mostly development processes applied (or forced) onto UX design disciplines, and they focus on breaking work up into smaller chunks. In principle, each deliverable is something that can be analyzed or designed, and complexity emerges over time in a controlled manner as iterations add to it. In practice, many of these methodologies become simply incremental processes, meaning the same work as always is now done in phases.

Lean methods build on these with principles of doing the most important work first, and allowing products to evolve organically. Practically, these approaches end up disregarding design and, seeking speed over quality, don’t really move us forward from a UX point of view.

Learning Nothing from Failure

The concept of “fail faster” and similar iteration-centric buzzwords only works well in scientific disciplines. As humans, we are inclined to evaluate the results of past activities, designs, and events as having been more predictable than they actually were before they took place.

This hindsight bias means you have an illusion of understanding what went wrong, and that understanding may not be accurate. The “I knew it all along” effect means people tend to remember past events as having been predictable at the time they happened. Our memory is inaccurate; we remember only the events or information that back our pre-existing worldview.

That doesn’t mean failure is bad. Science and the more formal engineering disciplines apply it well, and have done so for centuries. But you have to have clear goals, expected outcomes, and some control over variables. Then you have to take the time to measure (without bias) and accurately communicate the results. More formalized post-mortem reviews, like the U.S. military’s after action review, help tease out these issues in less technical areas like project management and team interaction.

In design and software development, speed kills. Without stopping to make sure we are doing the right thing, we rapidly make the same mistakes not once but over and over again.

I find it easy to integrate these analytical methods into the design process. Even when working alone, and without formal evaluation, it is worth looking at mistakes, changes, and past iterations to see why a failure occurred or a change was required.

As designers and developers we forget to look up to see where we are in the bigger picture. We try to block things out into pixels and comfortable code templates. We need to step back occasionally, just for a minute, and think in broad brushstrokes. How are our detailed decisions impacting the way the whole product works, and the way it will integrate with users’ lives?

Constraints

There are many pressures put on design and implementation teams. A lot of them are specific to industries, organizations, or projects. Often, when I come into an organization to consult, I find that many elements that are assumed to be constraints are actually variables, or at least arguable. I’d stipulate that the real constraints are:

  • Arbitrary Complexity: Most systems are so complex that they cannot be modeled with pencil and paper (or whiteboard, or flowchart, or wireframe) in sufficient detail to execute design.
  • Systems All the Way Down: The complex application or website is embedded into a much more complex OS, in a differently-complex device, running on a vastly complex network, carried by a person with their own needs and behaviors, living in a world of other people and really complex systems. You cannot predict what might happen to these variables and how they will affect your little part of the world.
  • Networks of Delay: Everything is connected now, but traditional network models fail us. Instead, we have built a giant, distributed supercomputer that we all carry windows into in our pockets. Our devices, apps, and websites just delay and interfere with access to this information, to varying degrees. Anything that reduces delay is good. Anything that gets in the way or exposes edges of the network is bad.
  • Systems are Fault-Intolerant: Right now, technical systems (software, data) fundamentally expect perfect information and absolute precision in computation. They are intolerant of imprecision, missing data, and failures to connect unless that behavior is also precisely designed (see the sketch after this list).
  • Users are Fault-Intolerant: While people are great at estimating, fudging, and getting by, they don’t like to use tools that break or cause them to work harder. Very simple gaps in “look and feel” can induce familiarity discomfort similar to the uncanny valley of anthropomorphized representations. At the very least, you will encounter reduced satisfaction and trust. Often, that means users will give up and use a competitor’s product or service.
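
As promised above, here is a minimal sketch, in Python, of the difference between fault-intolerant and tolerant handling of imperfect data. All of the names and fields are hypothetical:

    # Hypothetical sketch: two ways to render a user profile from imperfect data.
    def render_profile_strict(record: dict) -> str:
        # Fault-intolerant: any missing key raises KeyError and the screen dies.
        return f"{record['name']} ({record['city']}, {record['age']})"

    def render_profile_tolerant(record: dict) -> str:
        # Tolerant: treat gaps as normal variation and degrade gracefully.
        name = record.get("name") or "Someone"
        city = record.get("city")               # optional detail; omit if absent
        return f"{name} ({city})" if city else name

    incomplete = {"name": "Ada"}                # real-world data is often missing fields
    print(render_profile_tolerant(incomplete))  # -> Ada
    # render_profile_strict(incomplete) would raise KeyError: 'city'

The tolerant version costs a few extra lines, but it treats missing data as a normal variation to be absorbed rather than a fatal error.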

Bound to Fail

Relatively few people have been working on other methods that try to address some of these issues by being more flexible, working like the products, and working with the environments we design for. These methods and principles are less unified and are left without a single name. All of them are based on designs that are less scale-dependent (so they work on multiple screen sizes), are more modular to encourage re-use, and are annotated heavily to ensure better coordination with implementation and quality assurance.
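
As a concrete (and hypothetical) illustration of scale-independence, a design can target a handful of size classes rather than one comp per device. A sketch in Python, with invented breakpoints:

    # Hypothetical sketch: collapse dozens of screen widths into a few design
    # classes, so one layout per class replaces one comp per device.
    BREAKPOINTS = [(0, "small"), (480, "medium"), (840, "large")]  # assumed values

    def size_class(width_px: int) -> str:
        label = BREAKPOINTS[0][1]
        for min_width, name in BREAKPOINTS:
            if width_px >= min_width:
                label = name                    # widest matching class wins
        return label

    # Forty distinct widths collapse into three designs, not forty comps.
    for width in (320, 360, 480, 540, 768, 1024):
        print(width, "->", size_class(width))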

In the past decade or two, resilience engineering has become an important technical practice area. Resilience in this sense is the ability of a system to absorb disruptions without tipping over into a new kind of order. A building, when exposed to too much lateral ground movement, settles into a state of “rubble on the ground” unless you design it to resist this disruption adequately. Services like Google, Facebook, Etsy, Flickr, Yahoo!, and Amazon use resilience engineering teams and principles not just to make sure their data centers stay running, but to ensure that when failures occur, the end users don’t notice.

Our systems are embedded in other technical systems, which are in turn embedded in complex human systems: the user carrying the device around, who may be subject to rain, traffic delays, and screaming babies. We can’t predict every failure point, but we can plan to fail gracefully instead.
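
In code, planning to fail gracefully can be as simple as deciding ahead of time what to show when the network isn’t there. A minimal sketch, with hypothetical names standing in for a real data source:

    import random

    CACHE = {"headlines": ["Yesterday's top story"]}    # last known-good data

    def fetch_live() -> list:
        # Hypothetical stand-in for a flaky network call.
        if random.random() < 0.3:                       # assumed 30% failure rate
            raise TimeoutError("network unreachable")
        return ["Today's top story"]

    def get_headlines() -> list:
        try:
            fresh = fetch_live()
            CACHE["headlines"] = fresh                  # refresh the known-good copy
            return fresh
        except TimeoutError:
            return CACHE["headlines"]                   # degrade gracefully, don't crash

    print(get_headlines())                              # never raises, always shows something

The user on a train in a tunnel sees slightly stale headlines instead of an error screen; the failure is absorbed rather than exposed.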

Embracing Errors and Complexity

Building on flexible UX design methods, mixed with these principles of resilience engineering, I propose another approach to designing for these constraints of complexity, error, delay, and intolerance. Instead of fighting these, we can design better systems by embracing failure and uncertainty.

In 1952, computing pioneer John von Neumann called computational errors “an essential part of the process” of computing. This was almost forgotten for decades with the advent of the high-reliability computers we work on today. But increasing amounts of data and the need for power conservation are creating a need for unique solutions that leverage probabilistic design, ignore unused data, or let small errors occur in computing.
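
Von Neumann’s own remedy was redundancy: run the unreliable computation several times and take a majority vote. A toy simulation in that spirit, with illustrative error rates:

    import random

    def unreliable_bit(true_value: int, error_rate: float = 0.1) -> int:
        # A component that flips its output with probability error_rate.
        return true_value ^ (random.random() < error_rate)

    def majority_vote(true_value: int, copies: int = 9) -> int:
        # Nine unreliable copies, one vote: errors must outnumber correct results.
        ones = sum(unreliable_bit(true_value) for _ in range(copies))
        return 1 if ones > copies / 2 else 0

    trials = 10_000
    errors = sum(majority_vote(1) != 1 for _ in range(trials))
    print(f"error rate after voting: {errors / trials:.4%}")
    # With 10% component error and 9 copies, expect roughly 0.1% instead of 10%.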

Surprisingly, none of this is new, even to visual and graphic design. Printing machinery has inherent inaccuracies and behaviors that make certain apparently-possible designs insufficiently reliable for mass production. Print design has to account for ink being placed in slightly the wrong place, for overlaps on adjacent layers, and for slight errors in binding, folding, or trimming.

In print (and package design) these are all considered tolerances, or normal variations, not errors. Accounting for overlaps and inaccuracies in ink placement is a whole (very specialized) practice area called trapping. I have books on my shelf dedicated just to trapping.

For UX practice, accounting for tolerances and normal variation, instead of lamenting fragmentation, complexity, and error, means letting go of perfect solutions to create better, more robust systems. We need to embrace the messiness and complexity of systems, and of real life.

Applying these approaches to interaction design, with all its additional complexities, really doesn’t call for anything new. We already do many of these things, but we need to switch from treating them as discrete tasks to seeing them as principles that are applied regularly and repeatedly throughout the design of any project.

Here are some things we can do to design for imperfection:

  • Assume the Unknown: No matter how many questions you ask or how much research you do, there will be behaviors you cannot predict or understand. This will keep happening. A minor change or a shift in trends will cause use of your system to change.
  • Think Systematically: All the data we gather on personas and the concepts we use to develop storyboards are not just inputs to the design process; they are the design process. Systems thinking is about setting aside constraints and considering how the people, paperwork, email, data storage, and every other part make up the whole. So, in every phase of the project, explicitly discuss the user, your expectations of their behaviors and needs, and how those are met in every way.
  • Use and Make Patterns and Guidelines: Art and science both evolve by building on what has come before. Use existing patterns by understanding how they apply to your work. Create patterns for your product rather than screens. Design and document each widget to be as generalized as possible so you re-use and re-apply instead of designing each interaction from scratch.
  • Analyze and Focus: Admit that you do not know a lot about the pressures, behaviors, and inputs of the user and customer. But don’t just throw up your hands and design only for those you do know. List out likely needs, uses, and behaviors, classified so that they cover not specific actions (we’re not modeling use cases) but classes that the design can respond to. Rank these by their chance of happening, and focus on the high-chance classes.

Conclusion

These are not steps in a process but constant activities carried out regularly throughout design and—if you can get everyone to play along—execution. I’ll take a closer look at these activities in action in a future article. For now, keep in mind that this is very much not a process. Design tools do not really matter, as long as you are flexible enough to consider whatever tool is needed, and to solve problems as they arise, not as they are constrained by your tools.

If you’ve tried any of these approaches or have any thoughts on what’s been discussed here, please comment below.

This article was most recently inspired by Gary Anthes’ article “Inexact Design: Beyond Fault Tolerance,” on the probabilistic design of ICs, in the April 2013 edition of Communications of the ACM.

Image of quail eggs courtesy of Shutterstock.

ABOUT THE AUTHOR(S)

Steven Hoober is a mobile strategist, architect and interaction designer whose 4ourth Mobile helps large companies, mobile service providers, and startups understand how to exploit mobile technology to meet the needs of their users. He has been doing mobile and multi-channel design since 1999, working on everything from the earliest app stores, to browser design, to pretty much everything but games. Steven wrote the patterns and technical appendices for the book Designing Mobile Interfaces, maintains a repository of mobile design and development information at the 4ourth Mobile Patterns Wiki, and publishes a regular column on mobile in UX Matters.

Comments

And, to burden the comments, here’s another article I forgot that influenced me halfway through writing this up: http://www.andyfitzgerald.org/the-trouble-with-system/

Another rather nice take on thinking about systems and how users work, which arrives at much the same point as mine from a totally different point of view: http://www.poetpainter.com/thoughts/article/from-paths-to-sandboxes

I especially like this part: "These are platforms. You can make of it what you want. There is no prescribed way to use the system."

The whole concept of resilience engineering is getting a lot of useful traction in traditional CS schools of thought, such that it's starting to be considered an organizational-level behavior. This article, despite tending to be about software, engineering, etc., is really interesting for almost any organization: "The Antifragile Organization" by Ariel Tseitlin, which you can read here http://cacm.acm.org/magazines/2013/8/166315-the-antifragile-organization/abstract but only if you are a member of ACM (which I suggest, as it's got lots and lots of good articles and a huge library of research on UX topics). They talk about using it operationally, inducing failure constantly in order to learn how to make systems not fail. I think it points out how you can embed the process: even without a robot army, you can consider failure throughout your design process, and push it through to analysts (BAs) so they evaluate every component, at every step, for novel failures we can solve before launch.

I've been made aware of a few interesting articles (and books) from the past that maybe I should have found with more diligent research. Listed without comment, in no particular order, in case you wish to continue pursuing this concept:

http://alistapart.com/article/the-elegance-of-imperfection
http://alistapart.com/article/the-web-aesthetic
http://www.academia.edu/349078/The_value_of_imperfection_in_sustainable_design_The_emotional_tie_with_perfectible_artefacts_for_longer_lifespan
http://queue.acm.org/detail.cfm?id=2371297
http://www.resilientdesign.org/rdis-resilient-design-principles-need-your-feedback/

Does any of this remind you of something else you read (or wrote)? Put it up here for everyone to see and discuss.

Thanks for the article, really great. One small comment: when you say "think systematically," I find it better to say "think systemically." A systemic viewpoint is different from a systematic one (focus on the whole vs. focus on the tasks at hand).

Interesting point. I totally agree with how you describe the distinction, but I have encountered it described this way: "The adjective systematic means (1) carried out using step-by-step procedures, or (2) of, characterized, or constituting a system. It typically describes carefully planned processes that unfold gradually. Systemic, which is narrower in definition, means systemwide or deeply engrained in the system. It usually describes habits or processes that are difficult to reverse because they are built into a system." (http://grammarist.com/usage/systematic-systemic/). I am still worried about the use of systemic for the "deeply ingrained" part, as well as the negative connotations. I rarely encounter positive systemic behaviors. But I can be swayed. Thoughts???

Great article, Steven! I'm early in the process of defining guidelines for my company, and am thinking much along the same lines, so your writing resonated with me. Standards documents often focus on visual design, which is expected by stakeholders, but as you noted, it's impossible to specify enough to really inform implementation, and leaves much of what truly impacts users up for interpretation.

Another component of this for my work is the design of an authoring environment. I'm focusing on embedding standards and guidelines into the tools that are used to generate sites, such as content management systems, rather than creating a large document with instructions and rules. I'd prefer to empower people to build standards-compliant, consistent sites. Finding a balance between flexibility and consistency is the challenge in this context, but defining and documenting guiding principles and interaction patterns helps to foreground the users' experience, beyond colors and fonts.

I'm involved in a redesign project where I'm using the Patterns/Guidelines over Detailed Comps model right now. It definitely stacks up better when you're aiming for a large user group of varying technical, social, and demographic status, like we are. Good read.

Arturo, I never did bother finding the source for this, having picked part of it up from the referenced article: "In 1952, the computing pioneer John von Neumann, in a lecture at Caltech on 'error in logics,' argued that computational errors should not be viewed as mere 'accidents,' but as 'an essential part of the process.' That is, error has to be treated as an intrinsic part of computing."

As far as I know there is no film/video/audio of his lecture series, but von Neumann synthesized his 1952 lectures into a paper published in 1956: Probabilistic Logics and the Synthesis of Reliable Organisms from Unreliable Components.

There is much good discussion of the concept throughout the paper, though much of it is also a tedious exploration of logical examples. The quote is from the second paragraph:

The subject-matter, as the title suggests, is the role of error in logics, or in the physical implementation of logics – in automata-synthesis. Error is viewed, therefore, not as an extraneous and misdirected or misdirecting accident, but as an essential part of the process under consideration – its importance in the synthesis of automata being fully comparable to that of the factor which is normally considered, the intended and correct logical structure.

I've added hyperlinks in the article to the (poorly scanned) full text and to a slightly different version (also badly scanned, but somewhat searchable).

I would love to know the citation for von Neumann's comment.