
Ephemeral Applications and AGI

by Josh Tyson
1 min read

Expanding on the ideas explored in the previous episode, “Organizational AGI,” Robb and Josh welcome returning guest Ben Goertzel for a lively conversation about the future of software. Perhaps the most critical step toward unlocking AGI (often discussed alongside the singularity) is fostering AI that can write code. This goes far beyond LLMs spitting out Python snippets or building HTML websites: it points to single-use software written on the fly, by machines, to meet people’s individual needs. These ephemeral applications will have a massive impact on the emergence of artificial general intelligence.

Ben, founder and CEO of SingularityNET and someone who helped popularize the term singularity, puts it succinctly: “AI that can write code, this is the key to the singularity.”

You ask a machine to complete a task, and it writes a program that gets the job done. This future may be upon us sooner than we think. You’ll hear Ben assert that we already have the technology needed to create AI that can write new software, which touches on the idea of organizational AGI (or OAGI). For bold organizations, the opportunity exists to chart a course toward OAGI that drastically elevates experiences for customers and employees.

This episode has all sorts of food for thought as well as some exciting and actionable ideas that can have an immediate impact.

Josh Tyson
Josh Tyson is the co-author of the first bestselling book about conversational AI, Age of Invisible Machines. He is also the Director of Creative Content at OneReach.ai and co-host of both the Invisible Machines and N9K podcasts. His writing has appeared in numerous publications over the years, including Chicago Reader, Fast Company, FLAUNT, The New York Times, Observer, SLAP, Stop Smiling, Thrasher, and Westword.

