Digital Twins in an Agentic World

by Josh Tyson
1 min read

Digital twins are critical to the orchestration of AI agents, providing the context agents need to create meaningful experiences quickly and efficiently. Robb and Josh welcome Dr. Michael Grieves to the Invisible Machines podcast for a conversation about the origins of the concept, which Grieves pioneered in the early 2000s and later developed in collaboration with NASA. The conversation also covers the architecture required for orchestrating AI agents, which relies on the different types of digital twins that might emerge within an organization, spanning physical elements, temporal data, and collections of unstructured data.

Dr. Grieves comes ready to explore these connections, drawing on his book Product Lifecycle Management, his experience with digital twins in the manufacturing space, in the metaverse, and in simulations, and his numerous academic publications. The trio also discusses how something Michael calls “retirementitus” prevents organizations from embracing the sweeping technologies surrounding AI and digital twins.

Along with writing the seminal book Product Lifecycle Management and an influential article on digital twins for The Economist, Dr. Grieves has consulted for top global organizations, including Boeing, Unilever, Newport News Shipbuilding, and General Motors. He has been a senior executive at both Fortune 1000 companies and entrepreneurial organizations, and has served on the boards of public companies in the United States, China, and Japan. Dr. Grieves and Robb have both led projects with the Office of the Director of National Intelligence (ODNI), and both have piloted a Boeing 787 simulator.

Josh Tyson
Josh Tyson is the co-author of the first bestselling book about conversational AI, Age of Invisible Machines. He is also the Director of Creative Content at OneReach.ai and co-host of both the Invisible Machines and N9K podcasts. His writing has appeared in numerous publications over the years, including Chicago Reader, Fast Company, FLAUNT, The New York Times, Observer, SLAP, Stop Smiling, Thrasher, and Westword. 

