Designing for Oops

by Paivi Salminen
4 min read

Have you ever noticed how often you mess up small things? Sending messages to the wrong contact, losing track of why you’re in a room, pushing when you should pull. These aren’t personal failures; they’re proof that mistakes are part of being human. Yet our medical systems still blame individuals instead of fixing the broken design. The aviation industry transformed safety by embracing error reporting without penalizing those involved. Factory floors give any worker the power to halt production when they spot trouble. But preventable medical mistakes still kill thousands. Explore why we build systems that demand perfection from imperfect humans and how smart design could finally change that.

We tend to treat mistakes as personal failures, lapses in discipline, focus, or intelligence. But anyone who has ever sent a text message to the wrong person, walked into a room and forgotten why, or turned a key the wrong direction knows: human error isn’t an exception. It’s the rule.

The real issue isn’t that humans make mistakes. The issue is that most of our systems pretend we don’t.

If we want safer healthcare and hospitals, friendlier devices, and less chaos in daily life, we need to understand why errors happen and how smart design can keep them from spiraling into disasters.

Here’s a simple framework for thinking about human error, inspired by Don Norman’s book The Design of Everyday Things, and why the healthcare system desperately needs to pay attention.

Why error happens: the human brain isn’t a machine

People forget. They get distracted. They rely on habits. They make assumptions. This isn’t a moral failing; it’s cognitive reality.

Most environments, however, are built as if humans are flawless executors: “Just pay attention!” “Just remember!” “Just double-check!”

But “just” is doing a lot of heavy lifting there. Any system that depends on perfect memory, perfect attention, or perfect calm is already flawed. Human error isn’t random; it’s predictable. And if it’s predictable, it can be designed for.

Slips vs. mistakes: two types of human error

Understanding the difference between slips and mistakes matters because each requires a different solution.

Slips: right intention, wrong execution

You meant to turn the lock clockwise, but went the other way. You meant to grab your glasses, but picked up your sunglasses. You meant to click “Save,” but hit “Delete.”

Slips are errors of attention and action. They happen when the environment doesn’t provide enough feedback or clarity.

Mistakes: wrong intention from the start

You thought the meeting was at 2 p.m., but it was at 1. You assumed a button did one thing, but it did another.

Mistakes are errors in mental models, the underlying understanding of how something works.

Slips need better design. Mistakes need a better understanding of how things actually work.
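
To make the distinction concrete for anyone designing software, here is a minimal TypeScript sketch (hypothetical names, not from the article): the confirmation step and the undo path guard against slips like hitting “Delete” instead of “Save,” while plain wording about what deletion actually does addresses the mental model behind mistakes.

```typescript
// Hypothetical sketch: a destructive action designed with both error types in mind.
// Confirmation and undo catch slips ("I meant Save, not Delete");
// plain wording about consequences targets mistakes (a wrong mental model).

type DeleteResult = { deleted: true; undo: () => void } | { deleted: false };

function deleteDocument(
  docId: string,
  confirm: (message: string) => boolean, // e.g. a modal dialog
  remove: (id: string) => void,
  restore: (id: string) => void,
): DeleteResult {
  const ok = confirm(
    `Permanently delete document ${docId}? It can still be restored for 30 days.`,
  );
  if (!ok) return { deleted: false };

  remove(docId);
  // The undo path forgives the slip that gets past the confirmation.
  return { deleted: true, undo: () => restore(docId) };
}
```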

Social and institutional pressures

Even when we notice an error, we often stay quiet. Why? Because errors carry social cost. People fear embarrassment, discipline, or reputational damage.

  • Workers hide mistakes so they don’t look incompetent.
  • Professionals worry that reporting errors will end careers.
  • Institutions bury problems to avoid liability or scandal.

When an error becomes something shameful, people stop talking about it. When they stop talking, the system loses the very information it needs to improve. Silence is the enemy of safety.

Reporting error: when admitting “oops” becomes the superpower

Some industries have learned this lesson; aviation is the standout. In the US, NASA runs a voluntary, semi-anonymous reporting system that lets pilots report their own mistakes without fear of punishment. Once a report is processed, NASA removes the identifying details. The goal is learning, not blame.

This single design choice, treating error reports as valuable data, transformed flying into one of the safest activities humans do. Imagine that mindset everywhere else: errors aren’t confessions. They’re clues.
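
As a loose software analogy (my sketch, not how NASA's actual reporting system is built), the same design choice can be written down as a single data step: identifying details are stripped before a report is stored, so what remains can only be used as a clue, never as evidence against the person who filed it.

```typescript
// Hypothetical sketch of a blame-free error report: identifying fields are
// removed before storage, leaving only the information the system can learn from.

interface RawErrorReport {
  reporterName: string;
  reporterEmail: string;
  whatHappened: string;
  contributingFactors: string[];
}

type AnonymizedReport = Omit<RawErrorReport, "reporterName" | "reporterEmail">;

function anonymize(report: RawErrorReport): AnonymizedReport {
  const { reporterName, reporterEmail, ...lessons } = report;
  return lessons; // the clue survives; the confession does not
}
```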

Detecting error: catching the problem before it explodes

Toyota offers a masterclass in error detection. Their concept of Jidoka encourages any worker on the assembly line to pull the andon cord when something seems off. Production stops. The team gathers. They ask “Why?” again and again until the root cause emerges.

No shame. No hiding. No, “just be more careful next time.”

It’s an institutional acknowledgement that errors should be caught early, ideally before the defective part moves any further.
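
A rough software parallel (my sketch, not Toyota's system): in a pipeline built on the same principle, any stage can pull the cord and stop the whole line the moment it detects a defect, instead of quietly passing the defective item downstream.

```typescript
// Hypothetical sketch of jidoka in a pipeline: any stage may stop the line
// by throwing, rather than letting a defective item move further.

class AndonPulled extends Error {
  constructor(stage: string, reason: string) {
    super(`Line stopped at "${stage}": ${reason}`);
  }
}

type Stage<T> = { name: string; run: (item: T) => T };

function runLine<T>(item: T, stages: Stage<T>[]): T {
  return stages.reduce((current, stage) => stage.run(current), item);
}

// Example stage: inspection pulls the cord on a bad weld.
const inspectWeld: Stage<{ weldOk: boolean }> = {
  name: "weld inspection",
  run: (part) => {
    if (!part.weldOk) throw new AndonPulled("weld inspection", "defective weld");
    return part;
  },
};
```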

Hospitals and healthcare systems, by contrast, often operate with the cultural equivalent of “don’t pull the cord unless you’re absolutely sure.” In a high-pressure environment, that hesitation is costly.

Designing for error: making the wrong thing hard and the right thing obvious

If reporting and detecting errors are reactive, designing for error is proactive. This is the world of poka-yoke: error-proofing. The idea is to create systems that make mistakes difficult or impossible. You see it everywhere:

  • A microwave won’t start unless the door is closed.
  • A car will make a sound if you haven’t fastened a seatbelt.
  • A plug or connector is shaped so it only fits the right way round (USB-C goes further and fits either way up).

These designs keep humans from needing to be perfect. They replace vigilance with structure. At home, tiny design tweaks, e.g., a dedicated hook or bowl by the door for keys, do more for reliability than “trying harder” ever will.
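
In software, the nearest equivalent of poka-yoke is making the wrong state impossible to express. A minimal TypeScript sketch (a hypothetical API, not from the article): start() only exists on a microwave whose door is closed, so “start with the door open” is not a runtime error to catch but a line of code that will not compile.

```typescript
// Hypothetical poka-yoke sketch: the "start with the door open" error
// cannot be written, because start() only exists on a closed microwave.

class ClosedMicrowave {
  openDoor(): OpenMicrowave {
    return new OpenMicrowave();
  }
  start(seconds: number): void {
    console.log(`Heating for ${seconds}s`);
  }
}

class OpenMicrowave {
  closeDoor(): ClosedMicrowave {
    return new ClosedMicrowave();
  }
}

// new OpenMicrowave().start(30);          // compile error: door still open
new OpenMicrowave().closeDoor().start(30); // the only path the design allows
```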

The big question: why not medicine?

Healthcare is one of the most complex systems humans have built and also one of the least forgiving of mistakes. Yet the stakes couldn’t be higher.

The medical field faces every barrier discussed above: fear of lawsuits, fear of blame, institutional concerns about reputation, hierarchical cultures that discourage speaking up, environments that demand superhuman vigilance eleven hours into a shift, and more.

But if aviation can set up nonpunitive reporting systems, and manufacturing can empower workers to halt production, and consumer products can use poka-yoke to prevent predictable slips, why hasn’t medicine embraced these same principles? We already know how to build safer systems, so the real question is:

What would it take to finally apply these principles where they matter most: in the systems that care for human lives?

The article originally appeared on Substack.

Featured image courtesy of Randy Laybourne.

Paivi Salminen
Päivi Salminen, MSc, is a digital health innovator turned researcher with over a decade of experience driving growth and innovation across start-ups and international R&D projects. After years in the industry, she has recently transitioned into academia to explore how user experience and design thinking can create more equitable and impactful healthcare solutions. Her work bridges business strategy, technology, and empathy, aiming to turn patient and clinician insights into sustainable innovations that truly make a difference.

Ideas In Brief
  • The article explains why mistakes happen, not because we’re careless, but because most systems are built as if humans never mess up.
  • It shows how slips (right intention, wrong execution) and mistakes (wrong intention from the start) call for different solutions: better design for slips, better mental models for mistakes.
  • The piece outlines how aviation and factories prevent errors by removing blame, allowing workers to stop production when something’s off, and designing systems that make it difficult to do the wrong thing, and asks why healthcare hasn’t done the same.
