
Secrets of Agentic UX: Emerging Design Patterns for Human Interaction with AI Agents

by Greg Nudelman
10 min read

As AI Agents become more embedded in products, designers must understand what interacting with them actually looks like. This article walks through an emerging agentic design pattern, a Supervisor Agent coordinating a team of specialized Worker Agents, using a real-world CloudWatch troubleshooting flow shown at AWS re:Invent. With a step-by-step walkthrough and concrete design takeaways, it’s a must-read for anyone shaping the future of AI-powered interfaces.

By many accounts, AI Agents are already here; they are just not evenly distributed. However, few examples yet exist of what a good user experience of interacting with this near-futuristic incarnation of AI might look like. Fortunately, at the recent AWS re:Invent conference, I came upon an excellent example of the UX of interacting with AI Agents, and I am eager to share that vision with you in this article. But first, what exactly are AI Agents?

What are AI Agents?

Imagine an ant colony. In a typical ant colony, you have different specialties of ants: workers, soldiers, drones, queens, etc. Every ant in a colony has a different job — they operate independently, yet as part of a cohesive whole. You can “hire” an individual ant (Agent) to do some simple semi-autonomous job for you, which in itself is pretty cool. Now imagine that you can hire the entire ant hill to do something much more complex and interesting: figure out what’s wrong with your system, book your trip, or do pretty much anything a human can do in front of a computer.

Each ant on its own is not very smart — ants are instead highly specialized to do a particular job. Put together, however, the different specialties of ants present a kind of “collective intelligence” that we associate with higher-order animals. The most significant difference between “AI,” as we’ve been using the term in the blog, and AI Agents is autonomy. You don’t need to give an AI Agent precise instructions or wait for synchronized output — the entire interaction with a set of AI Agents is much more fluid and flexible, much like the way an ant hill would approach solving a problem.

UX for AI: A Framework for Designing AI-Driven Products (Wiley, 2025). Image by Greg Nudelman

How do AI Agents work?

There are many different ways that agentic AI might work — it’s an extensive topic worthy of its own book (perhaps in a year or two). In this article, we will use troubleshooting a system problem as an example of a complex flow involving a Supervisor Agent (also called a “Reasoning Agent”) and several Worker Agents. The flow starts when a human operator receives an alert about a problem. They launch an investigation, and a team of semi-autonomous AI Agents led by a supervisory Agent helps them find the root cause and make recommendations about how to fix the problem. Let’s break down the process of interacting with AI Agents in a step diagram:

Multi-stage agentic AI flow. Image by Greg Nudelman

The multi-stage agentic workflow pictured above has the following steps:

  1. A human operator issues a general request to a Supervisor AI Agent.
  2. The Supervisor AI Agent then spins up several specialized, semi-autonomous Worker AI Agents and issues them general requests; the Workers start investigating various parts of the system, looking for the root cause (Database).
  3. The Worker Agents bring back findings to the Supervisor Agent, which collates them as Suggestions for the human operator.
  4. The human operator accepts or rejects various Suggestions, which causes the Supervisor Agent to spin up additional Workers to investigate (Cloud).
  5. After some time going back and forth, the Supervisor Agent produces a Hypothesis about the Root Cause and delivers it to the human operator.

Just as when you contract a typical human organization, a Supervisor AI Agent has a team of specialized AI Agents at its disposal. The Supervisor can route a message to any of the AI Worker Agents under its supervision, which will do the task and communicate back to the Supervisor. The Supervisor may choose to assign the task to a specific Agent and send additional instructions later, when more information becomes available. Finally, when the task is complete, the output is communicated back to the user. The human operator then has the option to give feedback or additional tasks to the Supervising AI Agent, in which case the entire process begins again.

The human does not need to worry about any of the internal machinery — all of that is handled in a semi-autonomous manner by the Supervisor. All the human does is state a general request, then review and react to the output of this agentic “organization.” This is exactly how you would communicate with an ant colony, if you could do such a thing: you would assign the job to the queen and have her manage all of the workers, soldiers, drones, and the like. And much like in the ant colony, an individual specialized Agent does not need to be particularly smart or to communicate with the human operator directly — it needs only to semi-autonomously solve the specialized task it is designed to perform and pass precise output back to the Supervisor Agent, nothing more. It is the job of the Supervisor Agent to do all of the reasoning and communication. This AI model is more efficient, cheaper, and highly practical for many tasks. Let’s take a look at the interaction flow to get a better feel for what this experience is like in the real world.
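To make this division of labor concrete, here is a minimal Python sketch of the Supervisor/Worker pattern described above. All of the names (SupervisorAgent, WorkerAgent, investigate, accept) are hypothetical illustrations of the pattern, not an actual vendor API:

```python
from dataclasses import dataclass


@dataclass
class Finding:
    """A single piece of evidence returned by a Worker Agent."""
    source: str   # which specialist produced it, e.g., "metrics"
    detail: str


class WorkerAgent:
    """A narrow specialist that investigates one part of the system."""

    def __init__(self, specialty: str):
        self.specialty = specialty

    def investigate(self, request: str) -> Finding:
        # A real Worker would query logs, metrics, traces, etc.
        return Finding(self.specialty, f"observation about {request!r}")


class SupervisorAgent:
    """Routes a general request to specialists and collates their output."""

    def __init__(self, workers: list[WorkerAgent]):
        self.workers = workers
        self.case_file: list[Finding] = []  # evidence the human has accepted

    def handle(self, request: str) -> list[Finding]:
        # Fan the general request out to every specialist and
        # collate the results as Suggestions for the human operator.
        return [w.investigate(request) for w in self.workers]

    def accept(self, finding: Finding) -> None:
        # Human feedback: accepted evidence joins the case file
        # and steers the next round of investigation.
        self.case_file.append(finding)


supervisor = SupervisorAgent([WorkerAgent("metrics"), WorkerAgent("tracing")])
for suggestion in supervisor.handle("fault spike in bot-service"):
    print(suggestion)
```

Note that the human only ever talks to the Supervisor; the Workers never surface in the interface at all.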

Use case: CloudWatch investigation with AI Agents

For simplicity, we will follow the workflow diagram earlier in the article, with each step in the flow matching the diagram. This example comes from AWS re:Invent 2024 — Don’t get stuck: How connected telemetry keeps you moving forward (COP322), by AWS Events on YouTube, starting at 53 minutes.

Step 1

The process starts when the user finds a sharp increase in faults in a service called “bot-service” (top left in the screenshot) and launches a new investigation. The user then passes all of the pertinent information and perhaps some additional instructions to the Supervisor Agent.

Step 1: Human Operator launches a new investigation. Image Source: AWS via YouTube

Step 2

Now, in Step 2, the Supervisor Agent receives the request and spawns a set of Worker AI Agents that semi-autonomously look at different parts of the system. The process is asynchronous: the initial state of the Suggestions panel on the right is empty, because findings do not come immediately after the investigation is launched.

Step 2: Supervisor Agent launches Worker Agents that take some time to report back. Image Source: AWS via YouTube
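Because Worker Agents report back on their own schedules, the UI has to treat suggestions as a stream rather than a single response. Here is a minimal asyncio sketch of that non-blocking pattern (the worker function and its specialties are hypothetical):

```python
import asyncio
import random


async def worker(specialty: str) -> str:
    # Simulate an investigation that takes an unpredictable amount of time.
    await asyncio.sleep(random.uniform(0.5, 2.0))
    return f"{specialty}: suggested observation"


async def main() -> None:
    tasks = [asyncio.create_task(worker(s))
             for s in ("metrics", "tracing", "logs")]
    # Render each suggestion the moment it arrives, instead of
    # blocking until every Worker has finished.
    for finished in asyncio.as_completed(tasks):
        print(await finished)


asyncio.run(main())
```

This is why the Suggestions panel starts empty: each result appears whenever its Worker happens to finish.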

Step 3

Now the Worker Agents come back with some “suggested observations,” which are processed by the Supervisor and added to the Suggestions on the right side of the screen. Note that the right side of the screen is now wider, to allow for easier reading of the agentic suggestions. In the screen below, two very different observations are suggested by different Agents: the first specializes in service metrics, the second in tracing.

Step 3: Worker Agents come back with suggested observations that may pertain to the problem experienced by the system. Image Source: AWS via YouTube

These “suggested observations” form the “evidence” in an investigation targeted at finding the root cause of the problem. To figure out the root cause, the human operator in this flow helps out: they respond to the Supervisor Agent, telling it which of these observations are most relevant. Thus, the Supervisor Agent and the human work side by side to collaboratively figure out the root cause of the problem.

Step 4

The human operator responds by clicking “Accept” on the observations they find relevant, and those are added to the investigation “case file” on the left side of the screen. Now that the human has indicated which information they find relevant, the agentic process kicks into the next phase of the investigation: having received the user feedback, the Supervisor Agent stops sending “more of the same” and instead digs deeper, perhaps investigating a different aspect of the system in its search for the root cause. Note in the image below that the new suggestions coming in on the right are of a different type — these now look to the logs for a root cause.

Step 4: After user feedback, the Agents look deeper and come back with different suggestions. Image Source: AWS via YouTube
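One way to model this feedback loop is to let the accepted observations determine which specialists the Supervisor spawns next. A minimal sketch, with routing rules invented purely for illustration:

```python
def next_workers(accepted: list[str]) -> list[str]:
    """Pick the next round of specialists based on accepted evidence.

    The routing rules below are hypothetical: accepted tracing
    evidence prompts a deeper look at logs and deployments.
    """
    if any("tracing" in finding for finding in accepted):
        return ["logs", "deployments"]  # dig deeper along the accepted trail
    return ["metrics", "tracing"]       # otherwise, keep gathering breadth


print(next_workers(["tracing: error spike in bot-service"]))
# -> ['logs', 'deployments']
```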

Step 5

Finally, the Supervisor Agent has enough information to take a stab at identifying the root cause of the problem. Hence, it switches from evidence gathering to reasoning about the root cause. In Steps 3 and 4, the Supervisor Agent was providing “suggested observations.” Now, in Step 5, it is ready for the big reveal (the “denouement scene,” if you will), so, like a literary detective, the Supervisor Agent delivers its “Hypothesis suggestion.” (This is reminiscent of the game “Clue,” where the players take turns making “suggestions” and then, when they are ready to pounce, make an “accusation.” The Supervisor Agent is doing the same thing here!)

Step 5: Supervisor Agent is now ready to point out the culprit of the “crime.” Image Source: AWS via YouTube

The suggested hypothesis is correct, and when the user clicks “Accept,” the Supervisor Agent helpfully provides the next steps to fix the problem and prevent future issues of a similar nature. The Agent almost seems to wag a finger at the human by suggesting that they “implement proper change management procedures” — the foundation of any good system hygiene!

Supervisor Agent also provides the next steps to fix the problem and prevent it in the future. Image Source: AWS via YouTube

Final thoughts

There are many reasons why agentic flows are so compelling and the focus of so much AI development work today. Agents are capable, economical, and allow for a much more natural and flexible human-machine interface, where the Agents fill the gaps left by a human and vice versa, effectively a mind-meld of human and machine, a super-human “Augmented Intelligence” that is much more than the sum of its parts. However, getting the most value from interacting with Agents also requires drastic changes in how we think about AI and how we design the user interfaces that need to support agentic interactions:

  • Flexible, adjustable UI: Agents work alongside humans. To do that, AI Agents require a flexible workflow that supports continuous interactions between humans and machines across multiple stages — starting the investigation, accepting evidence, forming a hypothesis, providing next steps, etc. It is a flexible, looping flow that crosses multiple iterations.
  • Autonomy: while human-in-the-loop seems, for now, to be the norm for agentic workflows, Agents show remarkable abilities to come up with hypotheses, gather evidence, and iterate on a hypothesis as needed until they solve the problem. They do not get tired, run out of options, or give up. AI Agents also show the ability to effectively “write code… a tool building its own tool” to explore novel ways to solve problems — this is new. This kind of interaction by its nature requires an “aggressive” AI: these Agents are trained for maximum Recall, open to trying every possibility to ensure the most true positive outcomes (see our Value Matrix discussion here). This means that sometimes the Agents will take an action “just to try it,” without “thinking” about the cost of false positive or false negative outcomes. For example, an aggressive AI Agent “doctor” might prescribe an invasive brain cancer biopsy without first considering lower-risk alternatives, or even stopping to get the patient’s consent! All of this requires a deeper level of human and machine analysis, and multiple new approval flows for aggressive AI “exploration ideas” that might lead to human harm or simply cause costs to balloon past the budget.
  • New controls are required: while much of the interaction can be accomplished with existing screens, the majority of Agent actions are asynchronous, which means that most web pages built on the traditional transactional, synchronous request/response model are a poor match for this new kind of interaction. We are going to need to introduce some new design paradigms. For example, start, stop, and pause buttons are a good starting point for controlling the agentic flow (see the sketch after this list); otherwise, you run a very real risk of ending up in the “Sorcerer’s Apprentice” situation from Fantasia (with self-replicating brooms fetching water without stopping, creating a huge, expensive mess).
  • You “hire” AI to perform a task: this is a radical departure from traditional tool use. These are no longer tools; they are reasoning entities, intelligent in their own ways. An AI service already consists of multiple specialized Agents monitored by a Supervisor. Very soon, we will introduce multiple levels of management, with sub-supervisors and “team leads” reporting to a final “account executive Agent” that deals with humans… just as human organizations do today. Up to now, organizations needed to track Products, People, and Processes. Now we are adding a new kind of “people” — AI Agents. That means developing workable UIs for safeguarding confidential information, Role-Based Access Control (RBAC), and Agent versioning. Safeguarding agentic data is going to be even more important than signing NDAs with your human staff.
  • Continuously learning systems: to get the full value out of Agents, we need them to learn continuously. Agents learn fast, quickly becoming experts in whatever systems they work with. The initial Agent, just like a new intern, will know very little, but it will quickly become the “adult in the room,” with more access and more experience than most humans. This will create a massive power shift in the workplace. We need to be ready.
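To illustrate the new controls called out above, here is a minimal Python sketch of a start/pause/stop control that long-running agentic Workers could check between actions. The class and method names are hypothetical, not an existing library API:

```python
import threading


class AgentRunControl:
    """A start/pause/stop switch for a long-running agentic flow."""

    def __init__(self):
        self._running = threading.Event()
        self._stopped = threading.Event()
        self._running.set()  # start in the running state

    def pause(self) -> None:
        self._running.clear()

    def resume(self) -> None:
        self._running.set()

    def stop(self) -> None:
        self._stopped.set()
        self._running.set()  # release any paused workers so they can exit

    def checkpoint(self) -> bool:
        # Workers call this between actions: it blocks while paused
        # and returns False once the human has pulled the plug.
        self._running.wait()
        return not self._stopped.is_set()
```

A Worker that calls checkpoint() before each action can be paused mid-investigation or shut down cleanly, before it turns into a runaway broom.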

Regardless of how you feel about AI Agents, it is clear that they are here to stay and evolve alongside their human counterparts. It is, therefore, essential that we understand how agentic AIs work and how to design systems that allow us to work with them safely and productively, emphasizing the best of what humans and machines can bring to the table.

The article originally appeared on UX for AI.

Featured image courtesy: Greg Nudelman.


Greg Nudelman
Greg Nudelman is a UX Designer, Strategist, Speaker, and Author. For over 20 years, he has been helping Fortune 100 clients like Cisco, IBM, and Intuit create loyal customers and generate hundreds of millions of dollars in additional valuation. A veteran of 35 AI projects, Greg is currently a Distinguished Designer at Sumo Logic, creating innovative AI/ML solutions for Security, Network, and Cloud Monitoring. Greg has presented 120+ keynotes and workshops in 18 countries and authored 5 UX books and 24 patents. His latest book, “UX for AI,” is shipping May 13, 2025. More info at: https://UXforAI.com.

Ideas In Brief
  • The article examines emerging design patterns for human interaction with AI Agents, in which a Supervisor Agent coordinates a team of specialized, semi-autonomous Worker Agents.
  • It walks through a real-world CloudWatch troubleshooting flow from AWS re:Invent 2024, showing how the human operator and the Agents collaborate to gather evidence, accept or reject suggestions, and converge on a root-cause hypothesis.
  • The piece highlights what agentic interaction demands of UX: flexible looping workflows, asynchronous interaction models, new controls such as start/stop/pause, and safeguards for continuously learning systems.

