I’ve been thinking a lot about how much has changed in my career and what I do.
For me, it has never been about being an expert at tools or even about any specific industry, but about what’s underneath it all: the thing that connects humans to tech.
I used to focus on concepts, mapping user flows, and obsessing over consistency versus creativity. (All of which still matters at the right time.) Now? I’m focused on designing systems you can’t even see. Yes, the invisible.
When I first started in design, everything felt so tangible. You could point to a button, critique a color choice, or debate the placement of a navigation menu. Work that was great for a portfolio. Somewhere along the way, that started to feel incomplete. I was unsatisfied, like I was only touching the surface. What’s beneath it?
To my older brother’s annoyance, I was always asking questions, and I was never satisfied with the superficial answer. I didn’t care as much about “How does this look?” (I know it should look good; there are plenty of great visual designers out there, and that’s not me), but rather “How does this feel?” It’s not just about where a button should go, but “What is this person actually trying to accomplish, and what’s preventing them?”
With AI products, the interface isn’t the key focus anymore. The prompt box is just the starting point for the connection between humans and tech. Will this thing keep its promises? It’s now about what you can’t see.
You can have the most beautiful interface in the world, but if your AI doesn’t remember what we talked about five minutes ago, trust is broken (and as humans, we know how hard it is to trust once broken). You can have the smoothest animations, but if the system consistently misunderstands what I’m asking for, I’m gone.
So how do you design for trust? How do you design for something as intangible as reliability? (Now this makes me excited!)
My thoughts on this
I’ve started thinking about this as the invisible connection between humans and technology. Every time someone interacts with AI, there’s an unspoken agreement happening:
- I’ll give you my time and attention.
- I’ll explain what I need.
- In return, you’ll understand me and help me get there (my old friend reciprocity).
The interface is just the introduction — the entry point. The relationship happens in the invisible layer. It’s about context and how reliably the system delivers what you asked for, not what it thinks you asked for.
From flows to psychology: my background was in designing concepts and user flows. Map the journey, optimize the touchpoints, reduce friction. Traditional UX thinking.
However, AI systems don’t work like traditional software. They’re not following predetermined paths or presenting fixed options. They’re interpreting, inferring, and making connections that weren’t explicitly programmed.
Which means the design challenge isn’t just about the interface anymore. It’s about psychology.
- How do people actually think?
- How do they form intentions?
- How do they communicate when they’re not entirely sure what they want?
I’m now focusing more on cognitive psychology: how humans process information, how memory works, and how trust gets built or broken in relationships. Any credible resources or courses are welcome.
This is a relationship between humans and a system that’s trying to understand and help them.
Let’s imagine that AI products have 3 layers
Layer 1: The Visible Interface (UI as we know it)
This is what most people think of as “design”: the prompt box, the buttons, the visual feedback. It matters, but it’s no longer what sets products apart. How many prompt boxes have you seen? They pretty much all look the same. (There could be an opportunity there, too.)
Layer 2: The Interaction (UX as we know it)
How does the system handle follow-up questions? Can it maintain context across sessions? Does it ask for clarification when it’s unsure?
Layer 3: The Invisible Connection
A deeper layer — the psychological foundation of trust. Does the system do what it says it will do? Is it transparent about its limitations? Does it feel safe and trustworthy to collaborate with?
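To make the three layers concrete, here’s a minimal sketch of how they might separate in code. Everything in it is hypothetical (the type names, the `isAmbiguous` heuristic, the `TrustContract` shape): one way to model the idea, not a description of any real product.

```typescript
// Layer 1: the visible interface, just the entry point.
interface Prompt {
  sessionId: string;
  text: string;
}

// Layer 2: the interaction, context carried across turns
// and the willingness to ask instead of guess.
interface SessionContext {
  history: { role: "user" | "assistant"; text: string }[];
}

// Layer 3: the invisible connection, the promises the system makes
// and whether it is honest about its limits.
interface TrustContract {
  promises: string[];    // e.g. "I will remember this session"
  limitations: string[]; // e.g. "no web browsing"
}

// Hypothetical placeholder: real ambiguity detection is a hard problem.
function isAmbiguous(text: string): boolean {
  return text.trim().split(/\s+/).length < 3;
}

function respond(prompt: Prompt, ctx: SessionContext, contract: TrustContract): string {
  ctx.history.push({ role: "user", text: prompt.text });

  // Layer 2 behavior: ask for clarification when unsure,
  // rather than confidently misunderstanding.
  if (isAmbiguous(prompt.text)) {
    return "Just to be sure I help with the right thing: can you tell me a bit more?";
  }

  // Layer 3 behavior: be transparent when a request hits a stated limit.
  if (prompt.text.toLowerCase().includes("browse") && contract.limitations.includes("no web browsing")) {
    return "I can't browse the web, but here's what I can do instead...";
  }

  const reply = "..."; // the actual model call would go here
  ctx.history.push({ role: "assistant", text: reply });
  return reply;
}
```

The point of the separation: you can swap out Layer 1 without touching the other two, but break the contract in Layer 3 and no amount of interface polish will save you.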
Most products get Layer 1 (UI) right, some get Layer 2 (UX) right, but Layer 3?
- Traditional design was about creating paths → AI design is about creating relationships.
- Traditional design was about efficiency → AI design is about understanding.
- Traditional design was about pixels → AI design is about psychology.
The tools are different, too. Instead of wireframes and prototypes, it’s about conversation design, mental models, and trust frameworks. Instead of user flows, I’m mapping cognitive and emotional states.
It’s messier, too. How do you A/B test trust the same way you can test conversion rates?
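You can’t test trust directly, but you can watch proxy signals that correlate with it. A hedged sketch: the signal names and weights below are invented for illustration, not an established framework.

```typescript
// Hypothetical proxy signals for trust; none is a direct measure,
// but together they hint at whether people feel understood.
interface TrustSignals {
  rephrasedPrompts: number;  // restating the same request suggests misunderstanding
  abandonedSessions: number; // leaving mid-task suggests broken expectations
  returnVisits: number;      // coming back is the quietest vote of confidence
}

// One illustrative way to roll signals into a single score.
// The weights are made up; in practice you'd calibrate them against user research.
function trustScore(s: TrustSignals, totalSessions: number): number {
  const n = Math.max(totalSessions, 1);
  const misunderstandingRate = s.rephrasedPrompts / n;
  const abandonRate = s.abandonedSessions / n;
  const returnRate = s.returnVisits / n;
  return returnRate - 0.5 * misunderstandingRate - abandonRate;
}
```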
And yet, paradoxically, AI design is more human work: more about understanding how people actually think, feel, and communicate.
I’m excited to be on this path and grateful that my career led me here. There’s so much uncertainty, for sure, but I’m glad I stayed focused yet open and ever-evolving, using my strengths and not getting distracted by hype and shiny new things.
Back in the day, I thought UX was hard to explain until we came up with great analogies, but this is even harder to explain. What does it mean to design invisible connections and systems that help AI understand what humans actually want?
For me, it feels interesting, fulfilling, and challenging. Let’s see where this leads.
We go from building better products to building better relationships between humans and technology, and in a world where AI is becoming more prominent, these relationships become more important, too.
There’s so much more to discover, to understand, to design for, to explore, and that’s exactly where I want to be. Those who know me know this is aligned not only professionally but personally.
This article originally appeared on LinkedIn.
Featured image: AI-generated.