You Can Automate a 787 — You Can Automate a Company

by Robb Wilson
8 min read

If it’s possible to automate something as complex as a 787 Dreamliner, what’s stopping us from automating entire organizations? In this excerpt from the updated edition of Age of Invisible Machines, Robb Wilson explores how decades of experience in UX and conversational AI have led to a bold vision for the future of intelligent automation. From cockpit redesigns to the untapped potential of AI agents in enterprise, this piece connects the dots between user-centered design, multimodal interaction, and the rise of hyperautomation, showing how companies can build smarter, more intuitive systems that evolve as fast as the technology driving them.

To ensure that technology remains truly useful as its power grows exponentially, we need to keep a few basic questions at the center of our thinking. Who is this technology built for? What problems do the people it benefits need solved, and which of those do they want AI to solve? How might they employ AI agents to reach a resolution?

I began asking these questions decades ago, while doing user-centered design work that eventually led to the founding of one of the world’s first UX agencies, Effective UI (now part of Ogilvy). Terms like user-centric and customer experience weren’t in the vernacular, but they were central to the work we did for clients. For one project, I was part of a cross-disciplinary team tasked with redesigning the cockpit of the 747 for the 787 Dreamliner. The Dreamliner was going to have a carbon fiber cockpit, which allowed for bigger windows but left less space for buttons, and it was going to need more buttons than the button-saturated 747.

Our solution changed the way I thought about technology forever. We solved the button problem with large touchscreen panels that would show the relevant controls to the pilots based on the phase of the flight plan the plane was in. While there’s some truth to the idea that these planes do a lot of the flying automatically, the goal wasn’t to make the pilots less relevant; it was to give them a better experience with a lighter cognitive load. To fly the 747, pilots had to carry around massive manuals that provided step-by-step instructions for pressing buttons in sequence to execute specific functions during flight — manuals that there was barely room for in the crowded cockpits.

The experience of flying a commercial airplane became more intuitive because we were able to contextualize the pilot’s needs based on the flight plan data and provide a relevant interface. Context was the key to creating increasingly rewarding and personalized experiences. The other massive takeaway for me was that if you can automate a 787, you can automate a company.
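To make the contextual-interface idea concrete, here is a minimal sketch in Python of how a display might choose which controls to surface based on the current phase of a flight plan. The phase names and control groupings are invented for illustration; they are not taken from the 787 or any real avionics system.

```python
from enum import Enum, auto


class FlightPhase(Enum):
    """Hypothetical flight phases; a real flight plan defines many more."""
    PREFLIGHT = auto()
    TAXI = auto()
    TAKEOFF = auto()
    CRUISE = auto()
    APPROACH = auto()
    LANDING = auto()


# Map each phase to the control groups a touchscreen panel would surface.
# The groupings are illustrative, not taken from any real flight deck.
CONTROLS_BY_PHASE = {
    FlightPhase.PREFLIGHT: ["systems_check", "fuel", "flight_plan_entry"],
    FlightPhase.TAXI: ["ground_steering", "brakes", "flaps"],
    FlightPhase.TAKEOFF: ["thrust", "flaps", "gear"],
    FlightPhase.CRUISE: ["autopilot", "fuel_balance", "weather_radar"],
    FlightPhase.APPROACH: ["autopilot", "flaps", "gear", "speed_brake"],
    FlightPhase.LANDING: ["gear", "brakes", "reverse_thrust"],
}


def controls_for(phase: FlightPhase) -> list[str]:
    """Return only the controls relevant to the current phase of flight."""
    return CONTROLS_BY_PHASE[phase]


# During cruise, takeoff- and landing-specific controls stay hidden.
print(controls_for(FlightPhase.CRUISE))
```

The specific groupings don’t matter; what matters is that the interface queries context (the flight plan) instead of presenting every control at once.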

Of all the experiences people have with technology, conversational ones are typically some of the worst, though thankfully, that’s changing. Building a framework where conversational AI and AI agents can thrive is insanely difficult work, but it creates unmatched potential.

As a technologist, builder, and designer, I’ve been deploying and researching conversational AI for more than two decades. Some of my early experiments with conversational AI came to be known as Sybil, a bot I built about 20 years ago with help from Daisy Weborg (my eventual co-founder of OneReach.ai). The internet was a less guarded space back then, and in some ways, it was easier to feed Sybil context. For example, Sybil could send spiders crawling over geo-tagged data in my accounts to figure out where I was at any given moment. Daisy loved the “where’s Robb” skill because I was often on the move in those days, and she could get a better sense of my availability for important meetings.

Recently, I had a conversation with Adam Cheyer, one of the co-creators of Siri. When I was working on Sybil, I wasn’t fully aware of the work Adam was doing at Siri Labs. Likewise, he wasn’t hip to what I was doing either. Interestingly, though perhaps unsurprisingly in retrospect, we were trying to solve many of the same problems.

Adam mentioned a functionality built into the first version of Siri that allowed you to be reading an email from someone and ask Siri to call that person. That might sound simple, but it’s a relatively complex task, even by today’s standards. In this example, Siri is connecting contact information from Mail with associated data in Contacts, bridging two separate apps to create a more seamless experience for users.

“At the time, email and contacts integration wasn’t very good,” Cheyer said on our podcast. “So you couldn’t even get to the contact easily from an email. You had to leave an app and search for it. And it was a big pain. ‘Call him.’ It was a beautiful combination of manipulating what’s on the screen and asking for what’s not on the screen. For me, that’s the key to multimodal interaction.”
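Here is a rough sketch of the kind of cross-app resolution Adam is describing: combining what’s on the screen (the open email) with what isn’t (the contacts database) to resolve “him” into someone you can actually call. The data structures and matching logic below are hypothetical stand-ins, not Apple’s implementation.

```python
from dataclasses import dataclass


@dataclass
class Email:
    sender_name: str
    sender_address: str
    body: str


@dataclass
class Contact:
    name: str
    email: str
    phone: str


# A toy contacts store standing in for the Contacts app.
CONTACTS = [
    Contact("Adam Cheyer", "adam@example.com", "+1-555-0100"),
]


def resolve_reference(command: str, on_screen: Email) -> Contact | None:
    """Resolve 'call him' by joining on-screen context with contact data."""
    if "call" not in command.lower():
        return None
    # The on-screen email supplies the referent for "him"/"her"/"them".
    for contact in CONTACTS:
        if contact.email == on_screen.sender_address:
            return contact
    return None


email = Email("Adam Cheyer", "adam@example.com", "Lunch next week?")
target = resolve_reference("Call him", on_screen=email)
if target:
    print(f"Dialing {target.name} at {target.phone}")
```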

Adam went on to mention other functionalities that he assumed had been lost to the dustbin of history, including skills around discovery that he and Steve Jobs fought over. Apple acquired Siri in 2010, and the freestanding version of the app had something called semantic autocomplete. Adam explained that if you wanted to find a romantic comedy playing near you, typing the letters “R” and “O” into a text field might auto-complete to show rodeos, tea rooms, and romantic comedies. If you clicked “romantic comedy,” Siri would tell you which romantic comedies were showing near you, along with info about their casts and critical reviews. This feature never made it into the beta version of Siri that launched with the iPhone 4S in October 2011.

“I feel that because I lost that argument with Steve, we lost that in voice interfaces forever. I have never seen another voice assistant experience that had as good an experience as the original Siri. I feel it got lost to history. And discovery is an unsolved problem.”
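As a way of picturing what semantic autocomplete meant in the freestanding Siri app, here is a minimal sketch with made-up category and listing data: the typed fragment matches the names of semantic categories rather than raw search strings, and selecting a category runs a structured query.

```python
# Hypothetical catalog of semantic categories the assistant knows about.
CATEGORIES = ["rodeos", "tea rooms", "romantic comedies"]

# Made-up local listings keyed by category.
LISTINGS = {
    "romantic comedies": [
        {"title": "Example Rom-Com", "theater": "Downtown 6", "rating": "82%"},
    ],
}


def semantic_autocomplete(fragment: str) -> list[str]:
    """Suggest categories whose names contain the typed fragment."""
    fragment = fragment.lower()
    return [category for category in CATEGORIES if fragment in category]


def results_for(category: str) -> list[dict]:
    """Selecting a category runs a structured query, not a text search."""
    return LISTINGS.get(category, [])


print(semantic_autocomplete("ro"))       # ['rodeos', 'tea rooms', 'romantic comedies']
print(results_for("romantic comedies"))  # showtime-style records, not links
```

Typing “ro” surfaces rodeos, tea rooms, and romantic comedies because the match is against category names, and choosing one returns structured results rather than a page of links.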

I’m sharing these stories from Adam for two reasons. One, to remind you that there are people who have been working for decades on conversational AI. ChatGPT blew the doors open on this technology to the public, but for those of us who’ve been toiling on the inside for years, the response was something along the lines of, “Finally, people will believe me when I talk about how powerful this technology is!”

Another reason for sharing is that Adam’s experience with Steve Jobs illustrates that the choices we make now with this technology will set a trajectory that will become increasingly difficult to reset. With their ability to mine unstructured data (like written and recorded conversations), large language models (LLMs) have the power to solve the problem of discovery, but this is a problem that Adam and I have been circling for more than 20 years. Things might have been different if he’d won that argument with Jobs. 

You see, the ultimate goal isn’t that we can converse with machines, telling them every little thing we want them to do for us. The goal is for machines to be able to predict the things we want them to do for us before we even ask. The ultimate experience is not one where we talk to the machine, but one where we don’t need to, because it already knows us so well. We provide machines with objectives, but they don’t really need explicit instructions unless we want something done in a very specific way.

Siri’s popularity, along with the widespread adoption of smart speakers and Amazon’s Alexa, made something else clear to me. Talking to speakers in your house can be fun, but there’s really only so much intrinsic value in an automated home. Home is generally a place for relaxation, not productivity. Being able to walk into your office and engage in conversation with technology that’s running a growing collection of business process automations is where the real wealth of opportunity lies. Organizations are going to want their own proprietary versions of Alexa or Siri in different flavors: intelligent virtual assistants finely tuned to meet an organization’s security and privacy needs. Yet, coming up on ten years after the introduction of Alexa, there’s still no version of that within a business.

Due to the inherently complex nature of the tasks, the lack of maturity in the tools, and the difficulty in finding truly experienced people to build and run them, creating better-than-human experiences is extremely difficult to do. I once heard someone at Gartner call it “insanely hard.” Over the years, I’ve watched many successful and failed implementations (including some of our own crash-and-burn attempts). As we automated chatbots on websites, phone, SMS, WhatsApp, Slack, Alexa, Google Home, and other platforms, patterns began to emerge from the successful projects. We began studying those success stories to see how they compared to others.

My team gathered data and best practices over the course of more than 2 million hours of testing with over 30 million people participating in workflows across 10,000+ conversational applications (including over 500,000 hours of development). From that work, I’ve developed an intimate understanding of what it takes to build intelligent networks of applications and, more importantly, how to manage an ecosystem of applications that enables any organization to hyperautomate.

For most companies, ChatGPT has been a knock upside the head, waking them up to the fact that they’re already in the race toward hyperautomation or organizational artificial general intelligence (AGI). As powerful as GPT and other LLMs are, they are just one piece of an intelligent technology ecosystem. Just like a website needs a content strategy to avoid becoming a collection of disorganized pages, achieving hyperautomation requires a sound strategy for building an intelligent ecosystem and the willingness to quickly embrace new technology.

We’ve seen how disruptive this technology can be, but leveraged properly, generative AI, conversational interfaces, AI agents, code-free design, RPA (robotic process automation), and machine learning are something more powerful: they are force multipliers that can make companies that use them correctly impossible to compete with. The scope and implications of these converging technologies can easily induce future shock — the psychological state experienced by individuals or society at large when perceiving too much change in too short a period of time. That feeling of being overwhelmed might happen many times when reading this book. Organizations currently wrestling with their response to ChatGPT — those employing machines, conversational applications, or AI-powered digital workers in an ecosystem that isn’t high functioning — are likely experiencing some form of this.

The goal for this book is to alleviate future shock by equipping problem solvers with a strategy for building an intelligent, coordinated ecosystem of automation — a network of skills shared between intelligent digital workers that will have a widespread impact within an organization. Following this strategy will not only vastly improve your existing operations, but it will also forge a technology ecosystem that immediately levels up every time there’s a breakthrough in LLMs or some other tool. An ecosystem built for organizational AI can take advantage of new technologies the minute they drop.
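As a rough illustration of what a network of shared skills could look like in code, here is a minimal sketch. The registry, worker class, and example skill are hypothetical constructs of my own, not a prescribed architecture; the point is that a skill is published once and any digital worker in the ecosystem can discover and invoke it.

```python
from typing import Callable

# A shared registry: skills are published once, usable by any digital worker.
SKILL_REGISTRY: dict[str, Callable[..., str]] = {}


def register_skill(name: str):
    """Decorator that publishes a skill into the shared ecosystem."""
    def wrapper(fn: Callable[..., str]) -> Callable[..., str]:
        SKILL_REGISTRY[name] = fn
        return fn
    return wrapper


@register_skill("lookup_order_status")
def lookup_order_status(order_id: str) -> str:
    # In a real ecosystem this would call an order-management system.
    return f"Order {order_id} is out for delivery."


class DigitalWorker:
    """A worker that fulfills requests by invoking shared skills."""

    def __init__(self, name: str):
        self.name = name

    def handle(self, skill_name: str, **kwargs) -> str:
        skill = SKILL_REGISTRY.get(skill_name)
        if skill is None:
            return f"{self.name} has no skill named '{skill_name}'."
        return skill(**kwargs)


# Two different workers reuse the same skill without reimplementing it.
support_bot = DigitalWorker("support_bot")
sales_bot = DigitalWorker("sales_bot")
print(support_bot.handle("lookup_order_status", order_id="A123"))
print(sales_bot.handle("lookup_order_status", order_id="B456"))
```

When a new capability arrives (a better LLM, a new channel), it is added to the shared pool once and every worker in the ecosystem benefits immediately, which is what lets such an ecosystem level up the minute a breakthrough drops.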

It took me 20 years to develop the best practices and insights collected here. I’ve been fortunate to have had countless conversations with headstrong business leaders about how conversational AI fits into the enterprise landscape. I’ve seen firsthand how a truly holistic understanding of the technologies associated with conversational AI can make the crucial difference for enterprise companies struggling to find a balance amid the problems that come with this fraught territory. That balance will only come about when the people working with it have a strategy that can put converging technologies to work in intelligent ways, propelling organizations and, more broadly, the people of the world, into a bold new future.

This article was excerpted from Chapter 6 of the forthcoming revised and updated second edition of Age of Invisible Machines, the first bestselling book about conversational AI (Wiley, Apr 22, 2025).

Featured image courtesy of north.

Robb Wilson

Robb Wilson is the CEO and co-founder of OneReach.ai, a leading conversational AI platform powering over 1 billion conversations per year. He also co-authored The Wall Street Journal bestselling business book, Age of Invisible Machines. An experience design pioneer with over 20 years of experience working with artificial intelligence, Robb lives with his family in Berkeley, Calif.

Ideas In Brief
  • The article explores how automating a plane cockpit led to deeper insights about business automation.
  • It shows how conversational AI and agent-based systems can reduce cognitive load and improve decision-making.
  • It argues that organizations need intelligent ecosystems — not just tools like ChatGPT — to thrive in the age of automation.
