People tend to believe that companies are going to use AI to eliminate as many jobs as possible. It stands to reason that some businesses will try this approach, even though it’s a complete misuse of the technology. What we’re currently seeing, however, is individuals picking up generative tools and running with them, while companies drag their feet on integration.
One possible result is that consumers will be the ones to bring down companies. Laws prevent companies from spamming people with unwanted outbound messages, but nothing stops consumers from flooding contact centers with AI agents.
It costs consumers almost nothing to cobble together agents that robocall service centers and flood systems with data designed to win discounts, or worse, to confuse and deceive. If word gets out that a company issues credits under certain circumstances, customers might start hammering it, and the snowball effect could leave its call centers flooded with millions of inbound inquiries queued up to call back, all day long.
Whatever their intentions, it’s free and easy for consumers to scale ad hoc efforts to levels that will overwhelm a company’s resources. So what are companies going to do when their customers go outbound with AI?
I asked this question recently on the London Fintech Podcast, and the host, Tony Clark, had the response I’d been looking for. “You may have let the genie out of the bottle now, Robb,” he said, looking a bit shocked. “I’m sure the tech is available. I imagine my 14-year-old could probably hook up ElevenLabs or something with the GPT store and be off on something like that.”
The truth is, most companies evaluating agentic AI are thinking myopically about how they will use these tools offensively, ignoring the urgent need for agentic systems that can provide defensive solutions.
These systems must allow AI agents to detect and stop conversations that exist only to burn tokens. They need human-in-the-loop (HITL) functionality to ensure agents’ objectives are validated by a person who takes responsibility for the outcomes. The environment also needs canonical knowledge: a dynamic knowledge base that serves as a source of truth for both AI agents and humans.
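To make the first two requirements concrete, here is a minimal sketch of a token-burn guard feeding a HITL review queue. Everything in it (TokenBurnGuard, ReviewQueue, the 4,000-tokens-per-minute threshold) is hypothetical and for illustration only; real detection heuristics and escalation paths would be tuned to the business.

```python
from dataclasses import dataclass
from collections import deque
import time

@dataclass
class Turn:
    caller_id: str
    tokens_used: int
    timestamp: float

class TokenBurnGuard:
    """Flags conversations whose token spend looks like deliberate waste."""

    def __init__(self, max_tokens_per_minute: int = 4000):
        self.max_tokens_per_minute = max_tokens_per_minute
        self.windows: dict[str, deque] = {}  # recent turns per caller

    def record(self, turn: Turn) -> bool:
        """Log a turn; return True if the caller should go to a human."""
        window = self.windows.setdefault(turn.caller_id, deque())
        window.append(turn)
        # Drop turns older than the 60-second window.
        while window and window[0].timestamp < turn.timestamp - 60.0:
            window.popleft()
        spend = sum(t.tokens_used for t in window)
        return spend > self.max_tokens_per_minute

class ReviewQueue:
    """Human-in-the-loop step: a person signs off before the agent acts."""

    def __init__(self):
        self.pending: list[str] = []

    def escalate(self, caller_id: str, reason: str) -> None:
        # In a real system this would page an on-call reviewer.
        self.pending.append(f"{caller_id}: {reason}")

guard = TokenBurnGuard()
queue = ReviewQueue()
turn = Turn(caller_id="caller-42", tokens_used=5000, timestamp=time.time())
if guard.record(turn):
    queue.escalate(turn.caller_id, "token spend exceeded 4000 tokens/min")
print(queue.pending)  # ['caller-42: token spend exceeded 4000 tokens/min']
```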
Detection, HITL validation, and canonical knowledge are the base requirements of an agent runtime. A critical component of integration, an agent runtime is an environment for building, testing, deploying, and evolving AI agents (a toy sketch of the capabilities below follows the list).
- Runtimes maintain agent memory and goals across interactions
- Runtimes enable access to external tools like MCPs, APIs, and databases
- Runtimes allow multi-agent coordination
- Runtimes operate continuously in the background
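As a hedged illustration only, the sketch below shows those four capabilities in miniature. The Runtime and Agent classes are invented for this post, not any vendor’s API; a production runtime would add persistence, authentication, and the defensive guard sketched earlier.

```python
import asyncio

class Agent:
    """An agent whose memory and goals outlive any single interaction."""

    def __init__(self, name: str):
        self.name = name
        self.memory: list[str] = []   # maintained across interactions
        self.goals: list[str] = []

    async def handle(self, message: str, tools: dict) -> str:
        self.memory.append(message)
        # External tool access: a knowledge base stands in for MCPs/APIs/DBs.
        lookup = tools["knowledge_base"]
        return f"{self.name}: {lookup(message)}"

class Runtime:
    """Coordinates multiple agents and runs continuously in the background."""

    def __init__(self, agents: list, tools: dict):
        self.agents = agents
        self.tools = tools
        self.inbox: asyncio.Queue = asyncio.Queue()

    async def run(self) -> None:
        while True:  # operates continuously in the background
            message = await self.inbox.get()
            # Multi-agent coordination: fan the message out to every agent.
            replies = await asyncio.gather(
                *(a.handle(message, self.tools) for a in self.agents)
            )
            for reply in replies:
                print(reply)
            self.inbox.task_done()

async def demo() -> None:
    tools = {"knowledge_base": lambda q: "canonical answer from source of truth"}
    runtime = Runtime([Agent("support"), Agent("fraud-watch")], tools)
    background = asyncio.create_task(runtime.run())
    await runtime.inbox.put("Why was I charged twice?")
    await runtime.inbox.join()   # wait until every agent has replied
    background.cancel()

asyncio.run(demo())
```

The point of the design is that the loop never exits: the runtime sits in the background, fans each inbound message out to every agent, and keeps each agent’s memory across turns.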
And in terms of helping businesses use AI defensively, runtimes handle input/output across modalities like text and voice, so AI agents can spot bad actors and alert humans. In UX terms, the runtime is the backstage infrastructure that transforms your product’s assistant from a button-press chatbot into a collaborative, contextual, goal-oriented experience that can proactively protect organizations and their customers. However companies choose to frame it, there’s emerging risk in sitting back and waiting to see what happens next with AI. It just might be the end of your company.