
Designing the Invisible between humans and technology: My Journey Blending Design and Behavioral Psychology

by Anina Botha
4 min read

As AI becomes increasingly integrated into our lives, the role of design is shifting from crafting polished interfaces to shaping the invisible relationships between humans and technology. This article explores how trust, reliability, and psychology now define the true user experience, where success is measured not by flawless screens but by systems that understand, remember, and support us in meaningful ways.

I’ve been thinking a lot about how much has changed in my career and what I do.

For me, it has never been about mastering tools or specializing in a single industry, but about what’s underneath it all: the thing that connects humans to tech.

I used to focus on concepts, mapping user flows, and obsessing over consistency versus creativity (which still matters at the right time). Now? I’m designing systems you can’t even see. Yes, the invisible.

When I first started in design, everything felt so tangible. You could point to a button, critique a color choice, or debate the placement of a navigation menu. It was work that looked great in a portfolio. Somewhere along the way, though, that started to feel incomplete. I felt unsatisfied, as if I was only touching the surface. What’s beneath it?

To my older brother’s annoyance, I was always asking questions and was never satisfied with a superficial answer. I didn’t care as much about “How does this look?” (I know it should look good; there are plenty of great visual designers out there, and that’s not me) as about “How does this feel?” It’s not just about where a button should go, but “What is this person actually trying to accomplish, and what’s preventing them?”

With AI products, the interface is no longer the key focus of the product. The prompt box is just the starting point for the connection between humans and tech. Will this thing keep its promises? It’s now about what you can’t see.

You can have the most beautiful interface in the world, but if the AI doesn’t remember what you talked about five minutes ago, trust is broken (and as humans, we know how hard it is to trust once it’s broken). You can have the smoothest animations, but if the system consistently misunderstands what you’re asking for, you’re gone.

So how do you design for trust? How do you design for something as intangible as reliability? (Now this makes me excited!)

My thoughts on this

I’ve started thinking about this as the invisible connection between humans and technology. Every time someone interacts with AI, there’s an unspoken agreement happening:

  1. I’ll give you my time and attention.
  2. I’ll explain what I need.
  3. In return, you’ll understand me and help me get there (my old friend reciprocity).

The interface is just the introduction — the entry point. The relationship happens in the invisible layer. It’s about context and how reliably the system delivers what you asked for, not what it thinks you asked for.

From flows to psychology: my background used to be in designing concepts and user flows. Map the journey, optimize the touchpoints, reduce friction — traditional UX thinking.

However, AI systems don’t work like traditional software. They’re not following predetermined paths or presenting fixed options. They’re interpreting, inferring, and making connections that weren’t explicitly programmed.

Which means the design challenge isn’t just about the interface anymore. It’s about psychology.

  • How do people actually think?
  • How do they form intentions?
  • How do they communicate when they’re not entirely sure what they want?

I’m now focusing more on cognitive psychology: studying how humans process information, how memory works, and how trust gets built or broken in relationships. (Any credible resources or courses are welcome.)

This is a relationship between humans and a system that’s trying to understand and help.

Let’s imagine that AI products have three layers

Layer 1: The Visible Interface (UI as we know it)

This is what most people think of as “design”: the prompt box, the buttons, the visual feedback. It matters, but it’s not what sets products apart. How many times have you seen a prompt box? They pretty much all look the same. (There could be an opportunity here, too.)

Layer 2: The Interaction (UX as we know it)

How does the system handle follow-up questions? Can it maintain context across sessions? Does it ask for clarification when it’s unsure?

Layer 3: The Invisible Connection

A deeper layer — the psychological foundation of trust. Does the system do what it says it will do? Is it transparent about its limitations? Does it feel safe and trustworthy to collaborate with?
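Two of these behaviors — asking for clarification when unsure, and being transparent about limitations — can be made concrete. Here is a minimal, purely illustrative sketch; the function, thresholds, and confidence input are all hypothetical, not how any real AI product works internally:

```python
# A sketch of "ask when unsure" behavior. All names and thresholds here
# are hypothetical; real systems derive confidence very differently.

def respond(user_request: str, confidence: float) -> str:
    """Return an answer, a clarifying question, or an honest limitation."""
    CLARIFY_THRESHOLD = 0.6   # below this, ask rather than guess
    DECLINE_THRESHOLD = 0.2   # below this, admit the limitation

    if confidence < DECLINE_THRESHOLD:
        # Layer 3: transparency about limitations builds more trust
        # than a confident wrong answer.
        return "I don't know enough to help with that reliably."
    if confidence < CLARIFY_THRESHOLD:
        # Layer 2: asking beats guessing what the person meant.
        return f"Before I answer: what outcome do you want from '{user_request}'?"
    return f"Here's my best answer to '{user_request}'."

print(respond("book travel", 0.9))  # confident: answers directly
print(respond("book travel", 0.4))  # unsure: asks a clarifying question
print(respond("book travel", 0.1))  # out of its depth: states the limitation
```

The design point isn’t the code; it’s that the trustworthy behavior lives in branches the user never sees on screen.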

Most products get Layer 1 (UI) right, some get Layer 2 (UX) right, but Layer 3?

  • Traditional design was about creating paths → AI design is about creating relationships.
  • Traditional design was about efficiency → AI design is about understanding.
  • Traditional design was about pixels → AI design is about psychology.

The tools are different, too. Instead of wireframes and prototypes, it’s about conversation design, mental models, and trust frameworks. Instead of user flows, I’m mapping cognitive and emotional states.

It’s messier, too. How do you A/B test trust the same way you can test conversion rates?
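One partial answer is to measure trust through behavioral proxies rather than a single conversion number. The sketch below is an illustration of that idea only; the signals, field names, and weights are assumptions I’m making for the example, not an established framework:

```python
# A sketch of scoring trust via proxy signals (returns, corrections)
# instead of a single conversion rate. Signals and weights are
# illustrative assumptions, not an established metric.

def trust_score(sessions: list[dict]) -> float:
    """Blend behavioral proxies: did users come back, and how often
    did they have to correct the system?"""
    returned = sum(s["returned_next_week"] for s in sessions) / len(sessions)
    corrections = sum(s["corrections"] for s in sessions) / len(sessions)
    # Returning users signal trust; frequent corrections signal it breaking.
    return round(returned - 0.1 * corrections, 3)

variant_a = [
    {"returned_next_week": True, "corrections": 1},
    {"returned_next_week": False, "corrections": 4},
]
variant_b = [
    {"returned_next_week": True, "corrections": 0},
    {"returned_next_week": True, "corrections": 1},
]
print(trust_score(variant_a))  # lower: fewer returns, more corrections
print(trust_score(variant_b))  # higher: users come back and correct less
```

Even a rough composite like this lets two variants be compared on trust-like behavior, which a raw conversion rate hides.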

Paradoxically, though, AI design is more human work: more about understanding how people actually think, feel, and communicate.

I’m excited to be on this path and grateful that my career led me here. There’s plenty of uncertainty, for sure, but I’m glad I stayed focused yet open and ever-evolving, using my strengths and not getting distracted by hype and shiny new things.

Back in the day, I thought UX was hard to explain until we came up with great analogies, but this is even harder to explain. What does it mean, exactly, to design invisible connections and systems that help AI understand what humans actually want?

For me, it feels interesting, fulfilling, and challenging. Let’s see where this leads.

We go from building better products to building better relationships between humans and technology, and in a world where AI is becoming more prominent, these relationships become more important, too.

There’s so much more to discover, to understand, to design for, to explore — and that’s exactly where I want to be. Those who know me know this is aligned not only professionally but personally.

The article originally appeared on LinkedIn.

Featured image courtesy: AI-generated.

Anina Botha
Anina Botha is an independent product consultant applying behavioral psychology to products with over 15 years of experience helping teams turn human insight into products people actually use. Her work spans digital agriculture, risk intelligence, healthtech, logistics, and consumer platforms across the US, EU, and MENA. Anina is passionate about the “invisible” side of design, the mindsets, subtle cues, and behaviors that quietly drive real product conversion and growth. When she’s not collaborating with startups or empowering female founders, you’ll find her exploring new cities and collecting everyday human stories.

Ideas In Brief
  • The article explores the shift from designing visible interfaces to shaping invisible psychological connections between humans and AI.
  • It emphasizes that trust, reliability, and understanding are more critical design challenges than traditional UI or UX elements.
  • The piece argues that AI design is less about predefined flows and more about building relationships grounded in psychology and human behavior.

