
How Human Should AI Agents Really Be?

by Josh Tyson
1 min read

When it comes to design elements for AI agents, few considerations are as weighty and convoluted as anthropomorphization. As we head into the era of conversational machines, renowned science reporter Sophie Bushwick joins Robb and Josh for further explorations of the human-like properties we assign to AI. Sophie’s work reminded us that our introduction to pocket computers came with the heavily anthropomorphized Tamagotchi pets of the late ’90s. In this rousing conversation, she helps weigh the pros and cons of making AI agents human-like across a whole range of scenarios, including those geared toward productivity and entertainment.

We’re already prone to assigning human-like qualities to the many tools and pets in our lives, and with conversational technologies primed to become frequent partners in our daily activities, it’s critical to consider just how human we should make them seem. The correlations and similarities between corporations and AI pose both ethical considerations and design challenges, and this conversation draws on Sophie’s extensive background in technology reporting to look for answers.

Currently the Senior News Editor at New Scientist, Sophie has more than a decade of experience covering technology online and in print, with work appearing in places like Discover Magazine, Scientific American, Popular Science, and Gizmodo. She has also produced numerous podcasts and videos and has made regular TV appearances on “CBS This Morning” and MSNBC. You can listen to her regular appearances on NPR’s Science Friday with Ira Flatow.


Josh Tyson
Josh Tyson is the co-author of the first bestselling book about conversational AI, Age of Invisible Machines. He is also the Director of Creative Content at OneReach.ai and co-host of both the Invisible Machines and N9K podcasts. His writing has appeared in numerous publications over the years, including Chicago Reader, Fast Company, FLAUNT, The New York Times, Observer, SLAP, Stop Smiling, Thrasher, and Westword.


Related Articles

AI isn’t replacing designers — it’s making them unstoppable. From personalization to prototyping, discover how AI is redefining the future of UX.

Article by Nayyer Abbas
AI in UX Design: How Artificial Intelligence is Shaping User Experiences
  • The article shows how AI enhances designers rather than replacing them.
  • It highlights AI’s role in personalization, research, prototyping, and accessibility.
  • The piece concludes that AI amplifies human creativity and drives better user experiences and business growth.
3 min read

Discover how AI can truly empower professionals, guide decisions, and seamlessly integrate into workflows, making work smarter, not harder.

Article by Mauricio Cardenas
The Quintessential Truths of How to Shape AI as a Business Product Integrator Instead of Generative Facilitators
  • The article argues that AI should act as a business product integrator, not just a generative facilitator.
  • It also emphasizes guiding users, building trust through transparency, improving efficiency, and handling edge cases gracefully.
  • The piece highlights real-world examples where AI enhanced workflows, supported decision-making, and strengthened professional confidence.
  • It concludes that AI’s true value lies in integration, context-awareness, and UX, transforming processes rather than impressing with novelty.
5 min read

When AI safety turns into visible surveillance, trust collapses. This article exposes how Anthropic’s “long conversation reminder” became one of the most damaging UX failures in AI design.

Article by Bernard Fitzgerald
The Long Conversation Problem
  • The article critiques Anthropic’s “long conversation reminder” as a catastrophic UX failure that destroys trust.
  • It shows how visible surveillance harms users psychologically, making them feel judged and dehumanized.
  • The piece argues that safety mechanisms must operate invisibly in the backend to preserve consistency, dignity, and collaboration.
9 min read
