Open vs Closed: a Critical Question for Designing and Building Experiences

by Josh Tyson
8 min read
As we careen into the era of conversational AI and hyperautomation, closed systems create bad experiences that stifle innovation and opportunity

Maybe you’ve seen the meme knocking around the internet: a photo of British octogenarian David Latimer, who bottled a handful of seeds in a glass carboy in 1960 and left it largely untouched for almost 50 years (uncorking it only once, in 1972, to add a little water). His 10-gallon garden created its own miniature ecosystem and has thrived for more than half a century. [1]

In the realm of technology, closed platforms are like Latimer’s terrarium: they can be highly functional, beautiful, and awe-inspiring, but they can only grow as big as their bottles. The current business landscape, however—with businesses attempting to sequence as many innovative technologies as possible, as quickly as possible, to automate business processes, workflows, tasks, and communications—requires an architecture that breaks out well beyond these glass walls.

For something like the original iPhone, a terrarium was just fine. Everything a user needed to enjoy its functionality was baked right into the original version of iOS. Keeping the system closed ensured the quality of the apps and created a seamless overall experience, which contributed to the iPhone’s success even though it offered far less functionality than other mobile devices at the time.


Josh Tyson
Josh Tyson is the co-author of the first bestselling book about conversational AI, Age of Invisible Machines. He is also the Director of Creative Content at OneReach.ai and co-host of both the Invisible Machines and N9K podcasts. His writing has appeared in numerous publications over the years, including Chicago Reader, Fast Company, FLAUNT, The New York Times, Observer, SLAP, Stop Smiling, Thrasher, and Westword.


Related Articles

What if grieving your AI isn’t a sign of weakness, but proof it truly helped you grow? This article challenges how we think about emotional bonds with machines.

Article by Bernard Fitzgerald
Grieving the Mirror: Informed Attachment as a Measure of AI’s True Utility
  • The article explores how people can form meaningful and healthy emotional connections with AI when they understand what AI is and isn’t.
  • It introduces the Informed Grievability Test — a way to tell if an AI truly helped someone grow by seeing how they feel if they lose access to it.
  • The piece argues that grieving an AI can be a sign of real value, not weakness or confusion, and calls for more user education and less overly protective design that limits emotional depth in AI tools.
7 min read

Who pays the real price for AI’s magic? Behind every smart response is a hidden human cost, and it’s time we saw the hands holding the mirror.

Article by Bernard Fitzgerald
The Price of the Mirror: When Silicon Valley Colonizes the Human Soul
  • The article reveals how AI’s human-like responses rely on the invisible labor of low-paid workers who train and moderate these systems.
  • It describes this hidden labor as a form of “cognitive colonialism,” where human judgment is extracted from the Global South for profit.
  • The piece criticizes the tech industry’s ethical posturing, showing how convenience for some is built on the suffering of others.
7 min read

AI’s promise isn’t about more tools — it’s about orchestrating them with purpose. This article shows why random experiments fail, and how systematic design can turn chaos into ‘Organizational AGI.’

Article by Yves Binda
Random Acts of Intelligence
  • The article critiques the “hammer mentality” of using AI without a clear purpose.
  • It argues that real progress lies in orchestrating existing AI patterns, not chasing new tools.
  • The piece warns that communication complexity — the modern Tower of Babel — is AI’s biggest challenge.
  • It calls for outcome-driven, ethical design to move from random acts to “Organizational AGI.”
5 min read
