
Designing the Invisible between humans and technology: My Journey Blending Design and Behavioral Psychology

by Anina Botha
4 min read

As AI becomes increasingly integrated into our lives, the role of design is shifting from crafting polished interfaces to shaping the invisible relationships between humans and technology. This article explores how trust, reliability, and psychology now define the true user experience, where success is measured not by flawless screens but by systems that understand, remember, and support us in meaningful ways.

I’ve been thinking a lot about how much has changed in my career and what I do.

For me, it has never been about mastering tools or even being industry-specific, but about what lies underneath it all: the connection between humans and technology.

I used to focus on concepts, mapping user flows, and obsessing over consistency vs. creativity (which is still important at the right time). Now? I’m focusing on designing systems you can’t even see. Yes, the invisible.

When I first started in design, everything felt so tangible. You could point to a button, critique a color choice, or debate the placement of a navigation menu. It was work that looked great in a portfolio. Somewhere along the way, though, it started to feel incomplete. I felt unsatisfied, like I was only touching the surface. What’s beneath that surface?

To my older brother’s annoyance, I was always asking questions, and I was never satisfied with a superficial answer. I didn’t care as much about “How does this look?” (I know it should look good — there are plenty of great visual designers out there — that’s not me), but rather “How does this feel?” It’s not just about where a button should go, but “What is this person actually trying to accomplish, and what’s preventing them?”

With AI products, the interface is no longer the key focus of the product. The prompt box is just the starting point for the connection between humans and technology. The real question is: will this thing keep its promises? It’s now about what you can’t see.

You can have the most beautiful interface in the world, but if the AI doesn’t remember what I talked about five minutes ago, trust is broken (and as humans, we know how hard it is to rebuild trust once it’s broken). You can have the smoothest animations, but if the system consistently misunderstands what I’m asking for, I’m gone.

So how do you design for trust? How do you design for something as intangible as reliability? (Now this makes me excited!)

My thoughts on this

I’ve started thinking about this as the invisible connection between humans and technology. Every time someone interacts with AI, there’s an unspoken agreement happening:

  1. I’ll give you my time and attention.
  2. I’ll explain what I need.
  3. In return, you’ll understand me and help me get there (my old friend reciprocity).

The interface is just the introduction — the entry point. The relationship happens in the invisible layer. It’s about context and how reliably the system delivers what you asked for, not what it thinks you asked for.

Going from flows to psychology: my background used to be in designing concepts and user flows. Map the journey, optimize the touchpoints, reduce friction — traditional UX thinking.

However, AI systems don’t work like traditional software. They’re not following predetermined paths or presenting fixed options. They’re interpreting, inferring, and making connections that weren’t explicitly programmed.

Which means the design challenge isn’t just about the interface anymore. It’s about psychology.

  • How do people actually think?
  • How do they form intentions?
  • How do they communicate when they’re not entirely sure what they want?

I’m now focusing more on cognitive psychology: how humans process information, how memory works, and how trust gets built or broken in relationships. Any credible resources or courses are welcome.

This is a relationship between humans and systems that are trying to understand and help.

Let’s imagine that AI products have three layers

Layer 1: The Visible Interface (UI as we know it)

This is what most people think of as “design.” The prompt box, the buttons, the visual feedback. It matters, but it’s not what makes products different. How many times have you seen the prompt box? They all look pretty much the same. (There could be an opportunity there, too.)

Layer 2: The Interaction (UX as we know it)

How does the system handle follow-up questions? Can it maintain context across sessions? Does it ask for clarification when it’s unsure?

Layer 3: The Invisible Connection

A deeper layer — the psychological foundation of trust. Does the system do what it says it will do? Is it transparent about its limitations? Does it feel safe and trustworthy to collaborate with?

Most products get Layer 1 (UI) right, and some get Layer 2 (UX) right, but Layer 3?

  • Traditional design was about creating paths → AI design is about creating relationships.
  • Traditional design was about efficiency → AI design is about understanding.
  • Traditional design was about pixels → AI design is about psychology.

The tools are different, too. Instead of wireframes and prototypes, it’s about conversation design, mental models, and trust frameworks. Instead of user flows, I’m mapping cognitive and emotional states.

It’s messier, too. How do you A/B test trust the same way you can test conversion rates?

Paradoxically, AI design is more human work: more about understanding how people actually think, feel, and communicate.

I’m excited to be on this path and grateful that my career led me here. There is plenty of uncertainty, for sure, but I’m glad I stayed focused yet open and ever-evolving, using my strengths and not getting distracted by hype and shiny new things.

Back in the day, UX was hard to explain until we came up with great analogies. Now it’s even harder to explain: what does it mean to design invisible connections and systems that help AI understand what humans actually want?

For me, it feels interesting, fulfilling, and challenging. Let’s see where this leads.

We go from building better products to building better relationships between humans and technology, and in a world where AI is becoming more prominent, these relationships become more important, too.

There’s so much more to discover, to understand, to design for, to explore — and that’s exactly where I want to be. Those who know me know this aligns with me not only professionally but personally.

The article originally appeared on LinkedIn.



Anina Botha
Anina Botha is an independent product consultant applying behavioral psychology to products with over 15 years of experience helping teams turn human insight into products people actually use. Her work spans digital agriculture, risk intelligence, healthtech, logistics, and consumer platforms across the US, EU, and MENA. Anina is passionate about the “invisible” side of design, the mindsets, subtle cues, and behaviors that quietly drive real product conversion and growth. When she’s not collaborating with startups or empowering female founders, you’ll find her exploring new cities and collecting everyday human stories.

Ideas In Brief
  • The article explores the shift from designing visible interfaces to shaping invisible psychological connections between humans and AI.
  • It emphasizes that trust, reliability, and understanding are more critical design challenges than traditional UI or UX elements.
  • The piece argues that AI design is less about predefined flows and more about building relationships grounded in psychology and human behavior.

