
Designing the Invisible between humans and technology: My Journey Blending Design and Behavioral Psychology

by Anina Botha
4 min read

As AI becomes increasingly integrated into our lives, the role of design is shifting from crafting polished interfaces to shaping the invisible relationships between humans and technology. This article explores how trust, reliability, and psychology now define the true user experience, where success is measured not by flawless screens but by systems that understand, remember, and support us in meaningful ways.

I’ve been thinking a lot about how much has changed in my career and what I do.

For me, it has never been about mastering tools or even being industry-specific, but about what's underneath it all: what connects humans to technology.

I used to focus on concepts, mapping user flows, and obsessing over consistency versus creativity (which still matters, at the right time). Now? I'm focusing on designing systems you can't even see. Yes, the invisible.

When I first started in design, everything felt so tangible. You could point to a button, critique a color choice, or debate the placement of a navigation menu. Work that was great for a portfolio. Somewhere along the way, that started to feel incomplete. I felt unsatisfied, like I was only touching the surface. What's beneath that surface?

To my older brother’s annoyance, I was always asking questions, and I was never satisfied with the superficial answer. I didn’t care as much about “How does this look?” (I know it should look good — there’s a bunch of great visual designers out there — that’s not me), but rather “How does this feel?” It’s not just about where this button should go, but “What is this person actually trying to accomplish, and what’s preventing them?”

With AI products, the interface isn't the key focus of the product anymore. The prompt box is just the starting point for the connection between humans and tech. The real question is: will this thing keep its promises? It's now about what you can't see.

You can have the most beautiful interface in the world, but if the AI doesn't remember what we talked about five minutes ago, trust is broken (and as humans, we know how hard it is to rebuild trust once it's broken). You can have the smoothest animations, but if the system consistently misunderstands what I'm asking for, I'm gone.

So how do you design for trust? How do you design for something as intangible as reliability? (Now this makes me excited!)

My thoughts on this

I’ve started thinking about this as the invisible connection between humans and technology. Every time someone interacts with AI, there’s an unspoken agreement happening:

  1. I’ll give you my time and attention.
  2. I’ll explain what I need.
  3. In return, you’ll understand me and help me get there (my old friend reciprocity).

The interface is just the introduction — the entry point. The relationship happens in the invisible layer. It’s about context and how reliably the system delivers what you asked for, not what it thinks you asked for.

I've gone from flows to psychology. My background was in designing concepts and user flows: map the journey, optimize the touchpoints, reduce friction. Traditional UX thinking.

However, AI systems don’t work like traditional software. They’re not following predetermined paths or presenting fixed options. They’re interpreting, inferring, and making connections that weren’t explicitly programmed.

Which means the design challenge isn’t just about the interface anymore. It’s about psychology.

  • How do people actually think?
  • How do they form intentions?
  • How do they communicate when they’re not entirely sure what they want?

I'm now focusing more on cognitive psychology: how humans process information, how memory works, and how trust gets built or broken in relationships. (Any credible resources or courses are welcome.)

This is a relationship between humans and a system that's trying to understand and help.

Let’s imagine that AI products have three layers

Layer 1: The Visible Interface (UI as we know it)

This is what most people think of as “design.” The prompt box, the buttons, the visual feedback. It matters, but it’s not what makes products different. How many times have you seen the prompt box? They pretty much all look the same. (There could be an opportunity here, too.)

Layer 2: The Interaction (UX as we know it)

How does the system handle follow-up questions? Can it maintain context across sessions? Does it ask for clarification when it’s unsure?

Layer 3: The Invisible Connection

A deeper layer — the psychological foundation of trust. Does the system do what it says it will do? Is it transparent about its limitations? Does it feel safe and trustworthy to collaborate with?

Most products get Layer 1 (UI) right, some get Layer 2 (UX) right, but Layer 3?

  • Traditional design was about creating paths → AI design is about creating relationships.
  • Traditional design was about efficiency → AI design is about understanding.
  • Traditional design was about pixels → AI design is about psychology.

The tools are different, too. Instead of wireframes and prototypes, it’s about conversation design, mental models, and trust frameworks. Instead of user flows, I’m mapping cognitive and emotional states.

It’s messier, too. How do you A/B test trust the same way you can test conversion rates?

Paradoxically, though, AI design is more human work: more about understanding how people actually think, feel, and communicate.

I’m excited to be on this path and grateful that my career led me here. There’s so much uncertainty, for sure, but I’m glad I stayed focused yet open and ever-evolving, using my strengths and not getting distracted by hype and shiny new things.

Back in the day, I thought UX was hard to explain until we came up with great analogies. This is even harder to explain: what is it, exactly, to design invisible connections and systems that help AI understand what humans actually want?

For me, it feels interesting, fulfilling, and challenging. Let’s see where this leads.

We go from building better products to building better relationships between humans and technology, and in a world where AI is becoming more prominent, these relationships become more important, too.

There’s so much more to discover, to understand, to design for. To explore — and that’s exactly where I want to be. Those who know me know this is aligned not only professionally but personally.

The article originally appeared on LinkedIn.

Featured image courtesy: AI-generated.


Anina Botha
Anina Botha is an independent product consultant applying behavioral psychology to products with over 15 years of experience helping teams turn human insight into products people actually use. Her work spans digital agriculture, risk intelligence, healthtech, logistics, and consumer platforms across the US, EU, and MENA. Anina is passionate about the “invisible” side of design, the mindsets, subtle cues, and behaviors that quietly drive real product conversion and growth. When she’s not collaborating with startups or empowering female founders, you’ll find her exploring new cities and collecting everyday human stories.

Ideas In Brief
  • The article explores the shift from designing visible interfaces to shaping invisible psychological connections between humans and AI.
  • It emphasizes that trust, reliability, and understanding are more critical design challenges than traditional UI or UX elements.
  • The piece argues that AI design is less about predefined flows and more about building relationships grounded in psychology and human behavior.
