
Member-only story

In the Garden of Hyperautomation

by Henry Comes-Pritchett
25 min read
AI Tale of Two Topias

An odyssey exploring two possible outcomes for civilization as conversational AI takes hold—one brimming with the bright possibilities of user-controlled data, the other, decidedly dystopian.

Whether you’re hip to it or not, conversational AI—which is really the sequencing of technologies like NLU/NLP, code-free programming, RPA, and machine learning inside of organizational ecosystems—has already begun reshaping the world at large. Unsurprisingly, we’re seeing this primarily in business settings. Lemonade, a tech- and user-centric insurance company, is upending its industry by giving customers a rewarding insurance-buying experience facilitated by Maya, an intelligent digital worker described as “utterly charming” that can quickly connect the dots and get customers insured. Maya is essentially an infinitely replicable agent that is always learning and never makes the same mistake twice. Compare that with whatever it costs Allstate to retain more than 12,000 agents in the US and Canada—agents likely working on outdated legacy systems—and it’s easy to see which way ROI is trending.


Henry Comes-Pritchett

Henry is a burgeoning philosopher and a graduate of the University of Colorado Boulder, where he earned a BA in Philosophy and Linguistics and published an undergraduate thesis titled Risky Simulations. He hopes to illuminate the intersections between computational linguistics, metaphysics, and user experience to reveal interesting things about the world, ourselves, and the awakening era of conversational intelligence. Henry is driven by the mysteries of the mind and language and finds endless motivation in their strangeness.

Ideas In Brief
  • Henry Comes-Pritchett explores two possible futures of hyperautomation: a self-custodial utopia, and a data-driven dystopia.
  • Comes-Pritchett takes readers on a journey inspired by a sneak peek at Age of Invisible Machines, an upcoming book by celebrated tech leader and design pioneer Robb Wilson.
  • A philosophical treatise opens an odyssey that spans the breadth of possible civilizations, meeting the ordinary people who inhabit them and observing their trials and tribulations.
  • The reader is ultimately left to decide what state of affairs they would prefer, with a call to action inviting those willing to change the world to start doing the work now.

