
Designing the Invisible between humans and technology: My Journey Blending Design and Behavioral Psychology

by Anina Botha
4 min read

As AI becomes increasingly integrated into our lives, the role of design is shifting from crafting polished interfaces to shaping the invisible relationships between humans and technology. This article explores how trust, reliability, and psychology now define the true user experience, where success is measured not by flawless screens but by systems that understand, remember, and support us in meaningful ways.

I’ve been thinking a lot about how much has changed in my career and what I do.

For me, it has never been about mastering tools or even being industry-specific, but about what's underneath it all: the thing that connects humans to tech.

I used to focus on concepts, mapping user flows, and obsessing over consistency vs. creativity (which is still important at the right time). Now? I'm focused on designing systems you can't even see. Yes, the invisible.

When I first started in design, everything felt so tangible. You could point to a button, critique a color choice, or debate the placement of a navigation menu. It was work that looked great in a portfolio. Somewhere along the way, though, it started to feel incomplete. I felt unsatisfied, as if I were only touching the surface. What's beneath this surface level?

To my older brother’s annoyance, I was always asking questions, and I was never satisfied with the superficial answer. I didn’t care as much about “How does this look?” (I know it should look good — there are plenty of great visual designers out there — that’s not me), but rather “How does this feel?” It’s not just “Where should this button go?” but “What is this person actually trying to accomplish, and what’s preventing them?”

With AI products, the interface isn’t the key focus of the product anymore. The prompt box is just the starting point for the connection between humans and tech. The real question is: will this thing keep its promises? It’s now about what you can’t see.

You can have the most beautiful interface in the world, but if your AI doesn’t remember what we talked about five minutes ago, trust is broken (and as humans, we know how hard it is to trust once broken). You can have the smoothest animations, but if the system consistently misunderstands what I’m asking for, I’m gone.

So how do you design for trust? How do you design for something as intangible as reliability? (Now this makes me excited!)

My thoughts on this

I’ve started thinking about this as the invisible connection between humans and technology. Every time someone interacts with AI, there’s an unspoken agreement happening:

  1. I’ll give you my time and attention.
  2. I’ll explain what I need.
  3. In return, you’ll understand me and help me get there (my old friend reciprocity).

The interface is just the introduction — the entry point. The relationship happens in the invisible layer. It’s about context and how reliably the system delivers what you asked for, not what it thinks you asked for.

From flows to psychology: my background used to be in designing concepts and user flows. Map the journey, optimize the touchpoints, reduce friction — traditional UX thinking.

However, AI systems don’t work like traditional software. They’re not following predetermined paths or presenting fixed options. They’re interpreting, inferring, and making connections that weren’t explicitly programmed.

Which means the design challenge isn’t just about the interface anymore. It’s about psychology.

  • How do people actually think?
  • How do they form intentions?
  • How do they communicate when they’re not entirely sure what they want?

I’m now focusing more on cognitive psychology: studying how humans process information, how memory works, and how trust gets built or broken in relationships. (Any credible resources or courses are welcome.)

This is a relationship between humans and a system that’s trying to understand and help them.

Let’s imagine that AI products have 3 layers

Layer 1: The Visible Interface (UI as we know it)

This is what most people think of as “design.” The prompt box, the buttons, the visual feedback. It matters, but it’s not what’s making products different. How many times have you seen the prompt box? They pretty much all look the same. (Here could be an opportunity too.)

Layer 2: The Interaction (UX as we know it)

How does the system handle follow-up questions? Can it maintain context across sessions? Does it ask for clarification when it’s unsure?

Layer 3: The Invisible Connection

A deeper layer — the psychological foundation of trust. Does the system do what it says it will do? Is it transparent about its limitations? Does it feel safe and trustworthy to collaborate with?

Most products get Layer 1 right (UI), and some get Layer 2 right (UX). But Layer 3?

  • Traditional design was about creating paths → AI design is about creating relationships.
  • Traditional design was about efficiency → AI design is about understanding.
  • Traditional design was about pixels → AI design is about psychology.

The tools are different, too. Instead of wireframes and prototypes, it’s about conversation design, mental models, and trust frameworks. Instead of user flows, I’m mapping cognitive and emotional states.

It’s messier, too. How do you A/B test trust the same way you can test conversion rates?

And yet, paradoxically, AI design is more human work: more about understanding how people actually think, feel, and communicate.

I’m excited to be on this path and grateful that my career led me here. There’s so much uncertainty, for sure, but I’m glad I stayed focused yet open and ever-evolving, playing to my strengths rather than getting distracted by hype and shiny new things.

Back in the day, I thought UX was hard to explain until we came up with great analogies. Now it’s even harder to explain. What does it mean, exactly, to design invisible connections and systems that help AI understand what humans actually want?

For me, it feels interesting, fulfilling, and challenging. Let’s see where this leads.

We go from building better products to building better relationships between humans and technology, and in a world where AI is becoming more prominent, these relationships become more important, too.

There’s so much more to discover, to understand, to design for, to explore — and that’s exactly where I want to be. Those who know me know this is aligned not only professionally but personally.

The article originally appeared on LinkedIn.

Featured image courtesy: AI-generated.


Anina Botha
Anina Botha is an independent product consultant applying behavioral psychology to products with over 15 years of experience helping teams turn human insight into products people actually use. Her work spans digital agriculture, risk intelligence, healthtech, logistics, and consumer platforms across the US, EU, and MENA. Anina is passionate about the “invisible” side of design, the mindsets, subtle cues, and behaviors that quietly drive real product conversion and growth. When she’s not collaborating with startups or empowering female founders, you’ll find her exploring new cities and collecting everyday human stories.

Ideas In Brief
  • The article explores the shift from designing visible interfaces to shaping invisible psychological connections between humans and AI.
  • It emphasizes that trust, reliability, and understanding are more critical design challenges than traditional UI or UX elements.
  • The piece argues that AI design is less about predefined flows and more about building relationships grounded in psychology and human behavior.

