
Companies Aren’t Prepared for Outbound AI in the Hands of Consumers

by Robb Wilson
3 min read

People tend to believe that companies are going to use AI to eliminate as many jobs as possible. It stands to reason that some businesses will try this approach, even though it's a complete misuse of the technology. What we're currently seeing, however, is individuals picking up generative tools and running with them while companies drag their feet on integration.

One possible result is that it will be consumers who bring companies down. There are laws that prevent companies from spamming people with unwanted outbound messages, but there are none stopping consumers from flooding contact centers with AI agents.

It's basically free for people to cobble together agents that can robocall service centers and flood systems with data designed to get them discounts, or worse, to confuse and deceive. Customers might start hammering a company because word gets out that it issues credits under certain circumstances, creating a snowball effect where its call centers face millions of inbound inquiries from agents lined up to keep calling, all day long.

Whatever their intentions, it’s free and easy for consumers to scale ad hoc efforts to levels that will overwhelm a company’s resources. So what are companies going to do when their customers go outbound with AI? 

I asked this question recently on the London Fintech Podcast, and the host, Tony Clark, had the response I'd been looking for: "You may have let the genie out of the bottle now, Robb," he said, looking a bit shocked. "I'm sure the tech is available. I imagine my 14-year-old could probably hook up ElevenLabs or something with the GPT store and be off on something like that."

The truth is, most companies that are evaluating agentic AI are thinking myopically about how they will use these tools offensively. They are ignoring the urgent need for agentic systems that can provide defensive solutions. 

These systems must allow AI agents to detect and stop conversations that are just meant to burn tokens. They need human-in-the-loop (HitL) functionality to make sure agents' objectives are validated by a person who takes responsibility for the outcomes. This environment also needs canonical knowledge: a dynamic knowledge base that can serve as a source of truth for AI agents and humans.
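To make that concrete, here is a minimal sketch of what a defensive guard inside such a system might look like, written in plain Python. The `ConversationGuard` class, its thresholds, and the `notify_reviewer` callback are hypothetical names chosen for illustration, not the API of any particular runtime; a real deployment would pull these signals from its own telemetry, and the canonical knowledge base is omitted here for brevity.

```python
from dataclasses import dataclass, field
from typing import Callable, List

# Hypothetical sketch: names, thresholds, and callbacks are illustrative,
# not the API of any particular agent runtime.

@dataclass
class Turn:
    role: str    # "customer" or "agent"
    text: str
    tokens: int  # tokens consumed handling this turn


@dataclass
class ConversationGuard:
    token_budget: int = 4_000   # halt conversations that exist only to burn tokens
    max_repeats: int = 3        # identical requests repeated this often look scripted
    escalate_to_human: Callable[[str, List[Turn]], None] = lambda reason, turns: None
    turns: List[Turn] = field(default_factory=list)

    def observe(self, turn: Turn) -> bool:
        """Record a turn; return False if the conversation should be halted."""
        self.turns.append(turn)

        # Signal 1: total token spend with no resolution in sight.
        if sum(t.tokens for t in self.turns) > self.token_budget:
            self.escalate_to_human("token budget exceeded", self.turns)
            return False

        # Signal 2: the same customer message repeated verbatim, a common
        # signature of a scripted agent probing for credits or discounts.
        customer_msgs = [t.text.strip().lower() for t in self.turns if t.role == "customer"]
        if customer_msgs and customer_msgs.count(customer_msgs[-1]) >= self.max_repeats:
            self.escalate_to_human("repeated scripted request", self.turns)
            return False

        return True


# A human reviewer (the HitL step) takes responsibility for the outcome.
def notify_reviewer(reason: str, turns: List[Turn]) -> None:
    print(f"[HitL] Conversation flagged: {reason} ({len(turns)} turns)")


guard = ConversationGuard(escalate_to_human=notify_reviewer)
guard.observe(Turn(role="customer", text="I want the outage credit.", tokens=120))
```

The point isn't these particular heuristics; it's that detection, escalation, and human accountability live in one place rather than being bolted on after an incident.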

These are the base requirements of an agent runtime. A critical component of any integration effort, an agent runtime is an environment for building, testing, deploying, and evolving AI agents; a sketch of what one might look like follows the list below.

  • Runtimes maintain agent memory and goals across interactions
  • Runtimes enable access to external tools like MCPs, APIs, and databases
  • Runtimes allow multi-agent coordination
  • Runtimes operate continuously in the background
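As a rough illustration of that surface, a minimal runtime in Python might look like the sketch below. The class and method names are assumptions made for this example, not drawn from any specific product.

```python
import time
from typing import Any, Callable, Dict, List

# Hypothetical sketch of an agent runtime surface; names are illustrative only.

class AgentRuntime:
    def __init__(self) -> None:
        self.memory: Dict[str, List[dict]] = {}         # per-agent memory and goals, kept across interactions
        self.tools: Dict[str, Callable[..., Any]] = {}  # external tools: MCP servers, APIs, databases
        self.agents: Dict[str, Callable[["AgentRuntime", dict], dict]] = {}
        self.inbox: List[dict] = []                     # events arriving while the runtime runs in the background

    def register_tool(self, name: str, fn: Callable[..., Any]) -> None:
        self.tools[name] = fn

    def register_agent(self, name: str, handler: Callable[["AgentRuntime", dict], dict]) -> None:
        self.agents[name] = handler
        self.memory.setdefault(name, [])

    def send(self, agent: str, message: dict) -> None:
        """Queue work for an agent; agents can call this on each other to coordinate."""
        self.inbox.append({"agent": agent, "message": message})

    def run_forever(self, poll_seconds: float = 1.0) -> None:
        """Operate continuously in the background, dispatching queued events."""
        while True:
            while self.inbox:
                event = self.inbox.pop(0)
                result = self.agents[event["agent"]](self, event["message"])
                self.memory[event["agent"]].append(result)  # persist outcomes across interactions
            time.sleep(poll_seconds)
```

A guard like the one sketched earlier would be registered here alongside the customer-facing agents, which is what lets flagged conversations be routed to a human reviewer instead of quietly burning tokens.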

And in terms of helping businesses use AI defensively, runtimes handle input/output across modalities like text and voice, so AI agents can spot bad actors and alert humans. In UX terms, it's the backstage infrastructure that transforms your product's assistant from a button-press chatbot into a collaborative, contextual, goal-oriented experience that can proactively protect organizations and their customers. However companies choose to frame it, there's emerging risk in sitting back and waiting to see what happens next with AI. It just might be the end of your company.


Robb Wilson

Robb Wilson is the CEO and co-founder of OneReach.ai, a leading conversational AI platform powering over 1 billion conversations per year. He also co-authored The Wall Street Journal bestselling business book, Age of Invisible Machines. An experience design pioneer with over 20 years of experience working with artificial intelligence, Robb lives with his family in Berkeley, Calif.


