In an ecosystem built for the orchestration of LLMs, AI agents, and other generative tools, conversation is the tissue that connects all the individual nodes at play. A collection of advanced technologies is sequenced in perpetually intelligent ways to create automations of business processes that continue getting smarter. In these ecosystems, machines are communicating with other machines, but there are also conversations between humans and machines. Inside truly optimized ecosystems, humans are training their digital counterparts to complete new tasks through conversational interfaces — they’re telling them how to contextualize and solve problems.
These innovations, algorithms, and systems, sewn together, start to build toward what’s referred to as artificial general intelligence (AGI). Today we give machines a balance of objectives and instructions; a system that has achieved AGI will need only an objective in order to complete a task. This leads to the more imminent organizational AGI we’ve been talking so much about. Josh wrote about this connection in an article last year for Observer:
There’s the immediate and tangible benefit of people eliminating tedious tasks from their lives. Then there’s the long term benefit of a burgeoning ecosystem where employees and customers are interacting with digital teammates that can perform automations leveraging all forms of data across an organization. This is an ecosystem that starts to take the form of a digital twin.
McKinsey describes a digital twin as “a virtual replica of a physical object, person, or process that can be used to simulate its behavior to better understand how it works in real life.” They describe these twins inhabiting ecosystems similar to what we’re describing here, that they call an “enterprise metaverse … a digital and often immersive environment that replicates and connects every aspect of an organization to optimize simulations, scenario planning, and decision making.”
Something as vast as an enterprise metaverse won’t materialize inside a closed system where the tools have to be supplied exclusively by Google or IBM. If you’re handcuffed to a specific LLM, NLP, or NLU vendor, your development cycles will be limited by their schedule and capabilities. This is a common misstep for organizations evaluating vendors: it’s easy to think that the processing and contextualization of natural language is artificial intelligence — a faulty notion that ChatGPT in particular set ablaze. But LLMs and NLP/NLU are just individual pieces of technology that make up a much broader ecosystem for creating artificial intelligence. Perhaps more importantly, in terms of keeping an open system, LLMs and NLP/NLU are among the many modular technologies that can be orchestrated within an ecosystem. “Modular” means that, when better functionalities — like improved LLMs — emerge, an open system is ready to accept and use them.
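The modularity described above can be made concrete with a small interface. This is a minimal Python sketch, not any vendor’s actual API: the `LanguageModel` protocol and the `EchoModel` backend are hypothetical names invented for illustration. The point is that any model satisfying the interface can be swapped in without touching the rest of the ecosystem.

```python
from typing import Protocol


class LanguageModel(Protocol):
    """Any vendor's model plugs in by matching this one method."""

    def complete(self, prompt: str) -> str: ...


class EchoModel:
    """Stand-in backend; a real adapter would call a vendor API here."""

    def complete(self, prompt: str) -> str:
        return f"echo: {prompt}"


class Orchestrator:
    """Holds a swappable model; upgrading vendors is one assignment."""

    def __init__(self, model: LanguageModel) -> None:
        self.model = model

    def swap_model(self, model: LanguageModel) -> None:
        # When a better LLM emerges, the open system accepts it here.
        self.model = model

    def run(self, prompt: str) -> str:
        return self.model.complete(prompt)
```

In this sketch, replacing one vendor’s model with another never ripples beyond `swap_model` — which is the practical payoff of keeping the system open.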
LLMs, a common stumbling block
In the rush to begin hyperautomating, LLMs have quickly proven to be the first stumbling block for many organizations. As they attempt to automate specific aspects of their operations with these tools that seem to know so much (but actually “know” basically nothing), the result is usually a smattering of less-than-impressive chatbots that are likely unreliable and operating in their own closed systems. These cloistered AI agents are unable to become part of an orchestrated effort and thus create subpar user experiences.
Think of auto manufacturing. In some ways, it would be easier to manage the supply chain if everything came from one supplier or if the manufacturer supplied its own parts, but production would suffer. Ford — a pioneer of assembly-line efficiency — relies on a supply chain of over 1,400 tier 1 suppliers, with up to 10 tiers between final assembly and raw materials, providing significant opportunities to identify and reduce costs and to protect against economic shifts. This represents a viable philosophy where hyperautomation is concerned as well. Naturally, it comes with a far more complex set of variables, but relying on one tool or vendor stifles nearly every aspect of the process: innovation, design, user experience — it all suffers.
Strive for openness
“Most of the high-profile successes of AI so far have been in relatively closed sorts of domains,” Dr. Ben Goertzel said in his TEDxBerkeley talk, “Decentralized AI,” pointing to game playing as an example. He describes AI programs playing chess better than any human but reminds us that these applications still “choke a bit when you give them the full chaotic splendor of the everyday world that we live in.” Goertzel has been working in this frontier for years through the OpenCog Foundation, the Artificial General Intelligence Society, and SingularityNET, a decentralized AI platform which lets multiple AI agents cooperate to solve problems in a participatory way without any central controller.
In that same TEDx talk, Goertzel references ideas from Marvin Minsky’s book The Society of Mind: “It may not be one algorithm written by one programmer or one company that gives the breakthrough to general intelligence. …It may be a network of different AIs, each doing different things, specializing in certain kinds of problems.”
Hyperautomating within an organization is much the same: a whole network of elements working together in an evolutionary fashion. As the architects of the ecosystem are able to iterate rapidly, trying out new configurations, the fittest tools, AIs, and algorithms survive. From a business standpoint, these open systems provide the means to understand, analyze, and manage the relationships between all of the moving parts inside your burgeoning ecosystem, which is the only way to craft a feasible strategy for achieving hyperautomation.
Don’t fear the scope, embrace the enormity
Creating an architecture for hyperautomation is a matter of building infrastructure, not the individual elements that exist within it. It’s the roads, electricity, and waterways you put in place to support houses, buildings, and communities. That’s the problem many organizations have with these efforts: they fail to see how vast the undertaking is. Simulating human beings and automating tasks is not the same as buying an email marketing tool.
The beauty of an open platform is that you don’t have to get everything right up front. It might be frightening in some regards to step outside a neatly bottled or more familiar ecosystem, but the breadth and complexity of AI are also where its problem-solving powers reside. Following practical wisdom applied to emergent technologies — wait until a clear path forward emerges before buying in — won’t work here, because once one organization achieves a state of hyperautomation, its competitors won’t be able to catch up. By choosing one flavor or system for all of your conversational AI needs, you’re limiting yourself at a time when you need as many tools as you can get. The only way to know what tools to use is to try them all, and with a truly open system, you have the power to do that.
As you can imagine, this distributed development and deployment of microservices gives your entire organization a massive boost. You can also create multiple applications and skills concurrently, with more developers working on the same app at the same time and less time spent in development. All of this activity thrives because the open system allows new tools from any vendor to be sequenced at will.
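One way teams achieve that concurrent development is a shared registry that skills plug into independently. This is a minimal sketch under assumed names (`SkillRegistry`, `register`, `invoke` are invented for illustration), not a reference to any particular platform’s API.

```python
class SkillRegistry:
    """Skills built by separate teams register independently and are
    discoverable at runtime -- no central release train required."""

    def __init__(self) -> None:
        self._skills: dict[str, callable] = {}

    def register(self, name: str, handler) -> None:
        # Each team ships its own handler on its own schedule.
        self._skills[name] = handler

    def invoke(self, name: str, *args):
        if name not in self._skills:
            raise KeyError(f"no skill named {name!r}")
        return self._skills[name](*args)
```

Because every skill is just an entry in the registry, two teams never block each other: one can ship a new vendor’s tool while another iterates on an existing one.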
This article was excerpted from Chapter 11 of the forthcoming revised and updated second edition of Age of Invisible Machines, the first bestselling book about conversational AI (Wiley, Apr 22, 2025).
Featured image courtesy: by north.