Designing for Oops

by Päivi Salminen
4 min read

Have you ever noticed how often you mess up small things? Sending messages to the wrong contact, losing track of why you’re in a room, pushing when you should pull. These aren’t personal failures; they’re proof that mistakes are part of being human. Yet our medical systems still blame individuals instead of fixing the broken design. The aviation industry transformed safety by embracing error reporting without penalizing those involved. Factory floors give any worker the power to halt production when they spot trouble. But preventable medical mistakes still kill thousands. Explore why we build systems that demand perfection from imperfect humans and how smart design could finally change that.

We tend to treat mistakes as personal failures, lapses in discipline, focus, or intelligence. But anyone who has ever sent a text message to the wrong person, walked into a room and forgotten why, or turned a key the wrong direction knows: human error isn’t an exception. It’s the rule.

The real issue isn’t that humans make mistakes. The issue is that most of our systems pretend we don’t.

If we want safer healthcare, friendlier devices, and less chaos in daily life, we need to understand why errors happen and how smart design can keep them from spiraling into disasters.

Here’s a simple framework for thinking about human error, inspired by Don Norman’s book The Design of Everyday Things, and why the healthcare system desperately needs to pay attention.

Why error happens: the human brain isn’t a machine

People forget. They get distracted. They rely on habits. They make assumptions. This isn’t a moral failing; it’s cognitive reality.

Most environments, however, are built as if humans are flawless executors: “Just pay attention!” “Just remember!” “Just double-check!”

But “just” is doing a lot of heavy lifting there. Any system that depends on perfect memory, perfect attention, or perfect calm is already flawed. Human error isn’t random; it’s predictable. And if it’s predictable, it can be designed for.

Slips vs. mistakes: two types of human error

Understanding the difference between slips and mistakes matters because each requires a different solution.

Slips: right intention, wrong execution

You meant to turn the lock clockwise, but went the other way. You meant to grab your glasses, but picked up your sunglasses. You meant to click “Save,” but hit “Delete.”

Slips are errors of attention and action. They happen when the environment doesn’t provide enough feedback or clarity.

Mistakes: wrong intention from the start

You thought the meeting was at 2 p.m., but it was at 1. You assumed a button did one thing, but it did another.

Mistakes are errors in mental models, the underlying understanding of how something works.

Slips need better design. Mistakes need better understanding.
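
To make the distinction concrete in software, here is a minimal sketch of designing for slips, in the spirit of the “Save” vs. “Delete” example above. Every name in it is hypothetical; the point is only that an accidental click becomes recoverable instead of fatal.

```typescript
// Designing for a slip: the wrong click is assumed to happen,
// so "Delete" is made reversible for a short window instead of final.

type Item = { id: string; name: string };

const items = new Map<string, Item>([
  ["item-42", { id: "item-42", name: "Quarterly report" }],
]);

function deleteWithUndo(id: string, undoWindowMs = 5000) {
  const removed = items.get(id);
  if (!removed) return { undo: () => {} };

  items.delete(id); // the item disappears from the UI immediately

  const timer = setTimeout(() => {
    // Only after the window closes does the delete become permanent
    // (e.g., purging the record from the server).
  }, undoWindowMs);

  return {
    undo() {
      clearTimeout(timer);
      items.set(id, removed); // one click recovers from the slip
    },
  };
}

// Usage: show an "Undo" toast while the window is open.
const pending = deleteWithUndo("item-42");
pending.undo(); // the slip costs one extra click, not the document
```

Notice what this doesn’t fix: if the user genuinely believed “Delete” meant “archive,” that’s a mistake, and the remedy is a clearer label and a better mental model, not an undo button.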

Social and institutional pressures

Even when we notice an error, we often stay quiet. Why? Because errors carry social cost. People fear embarrassment, discipline, or reputational damage.

  • Workers hide mistakes so they don’t look incompetent.
  • Professionals worry that reporting errors will end careers.
  • Institutions bury problems to avoid liability or scandal.

When an error becomes something shameful, people stop talking about it. When they stop talking, the system loses the very information it needs to improve. Silence is the enemy of safety.

Reporting error: when admitting “oops” becomes the superpower

Some industries have learned this lesson, and aviation is the standout. In the US, NASA runs the Aviation Safety Reporting System, a voluntary, confidential program that lets pilots report their own mistakes without fear of punishment. Once a report is processed, NASA strips out the identifying details. The goal is learning, not blame.

This single design choice, treating error reports as valuable data, transformed flying into one of the safest activities humans do. Imagine that mindset everywhere else: errors aren’t confessions. They’re clues.
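
Here is what that design choice might look like reduced to a toy TypeScript sketch (the types and names are invented for illustration, not NASA’s actual system): identity exists only long enough to process the report, and only the lesson is stored.

```typescript
// Blame-free reporting as a data pipeline: identity is attached only
// during intake, then stripped before the report joins the shared dataset.

interface RawReport {
  reporterName: string;         // needed only while the report is processed
  contact: string;              // ditto, in case clarification is needed
  narrative: string;            // what happened, in the reporter's own words
  contributingFactors: string[];
}

type Lesson = Omit<RawReport, "reporterName" | "contact">;

const learningDatabase: Lesson[] = [];

function fileReport(report: RawReport): void {
  // ...follow-up questions can happen here, while contact details still exist...
  const { reporterName, contact, ...lesson } = report;
  learningDatabase.push(lesson); // only the de-identified lesson is kept
}
```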

Detecting error: catching the problem before it explodes

Toyota offers a masterclass in error detection. Their concept of Jidoka encourages any worker on the assembly line to pull the andon cord when something seems off. Production stops. The team gathers. They ask “Why?” again and again until the root cause emerges.

No shame. No hiding. No “just be more careful next time.”

It’s an institutional acknowledgement that errors should be caught early, ideally before the defective part moves any further.
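
Software pipelines can borrow the same move. In this toy TypeScript sketch (invented for illustration, not Toyota’s system), any stage can pull the cord by raising an error, and the whole line stops rather than passing a defective part downstream.

```typescript
// An "andon cord" for a processing pipeline: any stage may halt the line.

type Stage<T> = [name: string, step: (part: T) => T];

class LineStopped extends Error {
  constructor(stage: string, reason: string) {
    super(`Line stopped at "${stage}": ${reason}`);
  }
}

function runLine<T>(part: T, stages: Stage<T>[]): T {
  for (const [name, step] of stages) {
    try {
      part = step(part);
    } catch (err) {
      // Stop everything and surface the problem; don't patch it and move on.
      throw new LineStopped(name, err instanceof Error ? err.message : String(err));
    }
  }
  return part;
}

// Usage: the inspection stage pulls the cord the moment something seems off.
try {
  runLine({ torque: 3 }, [
    ["tighten", (w) => ({ ...w, torque: w.torque + 1 })],
    ["inspect", (w) => {
      if (w.torque < 5) throw new Error("torque below spec");
      return w;
    }],
  ]);
} catch (e) {
  console.log((e as Error).message); // Line stopped at "inspect": torque below spec
}
```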

Hospitals and healthcare systems, by contrast, often operate with the cultural equivalent of “don’t pull the cord unless you’re absolutely sure.” In a high-pressure environment, that hesitation is costly.

Designing for error: making the wrong thing hard and the right thing obvious

If reporting and detecting errors are reactive, designing for error is proactive. This is the world of poka-yoke: error-proofing. The idea is to create systems that make mistakes difficult or impossible. You see it everywhere:

  • A microwave won’t start unless the door is closed.
  • A car will make a sound if you haven’t fastened a seatbelt.
  • A USB-C connector works either way up, while a keyed plug fits only one way.

These designs keep humans from needing to be perfect. They replace vigilance with structure. At home, tiny design tweaks, e.g., a dedicated hook or bowl by the door for keys, do more for reliability than “trying harder” ever will.
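
In software, poka-yoke often means making the wrong action unrepresentable rather than merely discouraged. Here is a minimal TypeScript sketch of the microwave interlock above, a toy model rather than any real appliance API:

```typescript
// The type system plays the role of the door interlock:
// start() does not exist until the door is closed.

type OpenMicrowave = { door: "open"; close(): ClosedMicrowave };
type ClosedMicrowave = {
  door: "closed";
  open(): OpenMicrowave;
  start(seconds: number): void;
};

function makeMicrowave(): OpenMicrowave {
  const opened: OpenMicrowave = { door: "open", close: () => closed };
  const closed: ClosedMicrowave = {
    door: "closed",
    open: () => opened,
    start: (seconds) => console.log(`Heating for ${seconds}s`),
  };
  return opened;
}

const microwave = makeMicrowave();
// microwave.start(30);      // compile error: no start() while the door is open
microwave.close().start(30); // the only route to start() passes through close()
```

The guard lives in the structure itself: no amount of distraction can produce the invalid sequence, which is exactly what the physical door switch achieves.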

The big question: why not medicine?

Healthcare is one of the most complex systems humans have built and one of the least forgiving of mistakes. Nowhere are the stakes higher.

The medical field faces every barrier discussed above: fear of lawsuits, fear of blame, institutional concerns about reputation, hierarchical cultures that discourage speaking up, and environments that demand superhuman vigilance eleven hours into a shift.

But if aviation can set up nonpunitive reporting systems, and manufacturing can empower workers to halt production, and consumer products can use poka-yoke to prevent predictable slips, why hasn’t medicine embraced these same principles? We already know how to build safer systems, so the real question is:

What would it take to finally apply these principles where they matter most: in the systems that care for human lives?

The article originally appeared on Substack.

Featured image courtesy of Randy Laybourne.

Päivi Salminen
Päivi Salminen, MSc, is a digital health innovator turned researcher with over a decade of experience driving growth and innovation across start-ups and international R&D projects. After years in the industry, she has recently transitioned into academia to explore how user experience and design thinking can create more equitable and impactful healthcare solutions. Her work bridges business strategy, technology, and empathy, aiming to turn patient and clinician insights into sustainable innovations that truly make a difference.

Ideas In Brief
  • The article explains why mistakes happen, not because we’re careless, but because most systems are built as if humans never mess up.
  • It demonstrates how slips (doing the wrong thing) and mistakes (thinking the wrong thing) require different solutions: better design for slips, better understanding for mistakes.
  • The piece outlines how aviation and factories prevent errors by removing blame, allowing workers to stop production when something’s off, and designing systems that make it difficult to do the wrong thing, and asks why healthcare hasn’t done the same.
