
Companies Aren’t Prepared for Outbound AI in the Hands of Consumers

by Robb Wilson
3 min read

People tend to believe that companies are going to use AI to eliminate as many jobs as possible. It stands to reason that some businesses will try this approach—even though it’s a complete misuse of the technology. What we’re actually seeing, however, is individuals picking up generative tools and running with them while companies drag their feet on integration.

One possible result is that consumers, not competitors, will be the ones to bring companies down. There are laws that prevent companies from spamming people with unwanted outbound messages, but there are none stopping consumers from flooding contact centers with AI agents.

It’s basically free for people to cobble together agents that can robocall service centers and flood systems with requests designed to extract discounts—or worse, to confuse and deceive. Word might get out that a company issues credits under certain circumstances, prompting customers to start hammering it. This could create a snowball effect: call centers flooded with millions of inbound inquiries, all queued up to keep calling, all day long.

Whatever their intentions, it’s free and easy for consumers to scale ad hoc efforts to levels that will overwhelm a company’s resources. So what are companies going to do when their customers go outbound with AI? 

I asked this question recently on the London Fintech Podcast, and the host, Tony Clark, had the response I’ve been looking for: “You may have let the genie out of the bottle now, Robb,” he said, looking a bit shocked. “I’m sure the tech is available. I imagine my 14-year-old could probably hook up ElevenLabs or something with the GPT store and be off on something like that.”

The truth is, most companies that are evaluating agentic AI are thinking myopically about how they will use these tools offensively. They are ignoring the urgent need for agentic systems that can provide defensive solutions. 

These systems must allow AI agents to detect and stop conversations that are only meant to burn tokens. They need human-in-the-loop (HitL) functionality to ensure agents’ objectives are validated by a person who takes responsibility for the outcomes. This environment also needs canonical knowledge—a dynamic knowledge base that can serve as a source of truth for AI agents and humans alike.
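The token-burn detection requirement could take many forms. Here is a minimal sketch of one approach—flagging conversations that repeat near-identical messages for HitL review. All class and method names are hypothetical, and the thresholds are arbitrary assumptions, not values from any real product:

```python
import hashlib
from collections import deque

class TokenBurnDetector:
    """Flags conversations that repeat near-identical messages,
    a common signature of agents scripted to burn tokens.
    Illustrative sketch only; thresholds are assumptions."""

    def __init__(self, window: int = 10, max_repeats: int = 3):
        self.max_repeats = max_repeats
        self.recent = deque(maxlen=window)  # fingerprints of recent messages

    def _fingerprint(self, message: str) -> str:
        # Normalize whitespace and case so trivial rewording still matches.
        normalized = " ".join(message.lower().split())
        return hashlib.sha256(normalized.encode()).hexdigest()

    def should_escalate(self, message: str) -> bool:
        fp = self._fingerprint(message)
        repeats = sum(1 for seen in self.recent if seen == fp)
        self.recent.append(fp)
        # Escalate to a human reviewer once the same message has
        # already appeared max_repeats times in the recent window.
        return repeats >= self.max_repeats
```

In practice, a production system would likely use semantic similarity rather than exact fingerprints, but the shape is the same: detect the pattern, then hand the decision to a human.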

These are the base requirements of an agent runtime. A critical component of integration, an agent runtime is an environment for building, testing, deploying, and evolving AI agents.

  • Runtimes maintain agent memory and goals across interactions
  • Runtimes enable access to external tools like MCPs, APIs, and databases
  • Runtimes allow multi-agent coordination
  • Runtimes operate continuously in the background
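These four responsibilities can be sketched as a single interface. This is an illustrative outline, not any real framework’s API—every name here is hypothetical:

```python
from dataclasses import dataclass, field
from typing import Any, Callable

@dataclass
class AgentRuntime:
    """Sketch of the four runtime responsibilities listed above.
    Hypothetical names; not a real framework's API."""
    # 1. Persistent memory and goals across interactions
    memory: dict = field(default_factory=dict)
    goals: list = field(default_factory=list)
    # 2. Registry of external tools (MCP servers, APIs, databases)
    tools: dict = field(default_factory=dict)
    # 3. Peer agents this runtime can coordinate with
    peers: list = field(default_factory=list)

    def register_tool(self, name: str, fn: Callable[..., Any]) -> None:
        self.tools[name] = fn

    def call_tool(self, name: str, *args: Any) -> Any:
        return self.tools[name](*args)

    def delegate(self, goal: str) -> None:
        # 4. Continuous background operation: in a real runtime this
        # would run in an event loop; here it just hands goals to peers.
        for peer in self.peers:
            peer.goals.append(goal)
```

The point of the sketch is the separation of concerns: memory, tool access, and coordination live in the runtime, so individual agents stay small and auditable.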

And in terms of helping businesses use AI defensively, runtimes handle input and output across modalities like text and voice, so AI agents can spot bad actors and alert humans. In UX terms, it’s the backstage infrastructure that transforms your product’s assistant from a button-press chatbot into a collaborative, contextual, goal-oriented experience that can proactively protect organizations and their customers. However companies choose to frame it, there’s emerging risk in sitting back and waiting to see what happens next with AI. It just might be the end of your company.

Robb Wilson

Robb Wilson is the CEO and co-founder of OneReach.ai, a leading conversational AI platform powering over 1 billion conversations per year. He also co-authored The Wall Street Journal bestselling business book, Age of Invisible Machines. An experience design pioneer with over 20 years of experience working with artificial intelligence, Robb lives with his family in Berkeley, Calif.


