
How Human Should AI Agents Really Be?

by Josh Tyson
1 min read

When it comes to design elements for AI agents, few considerations are as weighty and convoluted as anthropomorphization. As we head into the era of conversational machines, renowned science reporter Sophie Bushwick joins Robb and Josh to further explore the human-like properties we assign to AI. Sophie’s work reminded us that our introduction to pocket computers came with the heavily anthropomorphized Tamagotchi pets of the late ’90s. In this rousing conversation, she helps weigh the pros and cons of making AI agents human-like across a whole range of scenarios, including those geared toward productivity and entertainment.

We’re already prone to assigning human-like qualities to the many tools and pets in our lives, and with conversational technologies primed to become frequent partners in our daily activities, it’s critical to consider just how human we should make them seem. The correlations and similarities between corporations and AI pose ethical considerations as well as design challenges, and this conversation draws on Sophie’s extensive background in technology reporting to look for answers.

Currently the Senior News Editor at New Scientist, Sophie has more than a decade of experience covering technology online and in print, with work appearing in places like Discover Magazine, Scientific American, Popular Science, and Gizmodo. She has also produced numerous podcasts and videos and made regular TV appearances on “CBS This Morning” and MSNBC. You can listen to her regular appearances on NPR’s Science Friday with Ira Flatow.

Josh Tyson
Josh Tyson is the co-author of the first bestselling book about conversational AI, Age of Invisible Machines. He is also the Director of Creative Content at OneReach.ai and co-host of both the Invisible Machines and N9K podcasts. His writing has appeared in numerous publications over the years, including Chicago Reader, Fast Company, FLAUNT, The New York Times, Observer, SLAP, Stop Smiling, Thrasher, and Westword. 


Related Articles

What if your AI didn’t just agree, but made you think harder? This piece explores why designing for pushback might be the key to smarter, more meaningful AI interactions.

Article by Charlie Gedeon
The Power of Designing for Pushback
  • The article argues that AI systems like ChatGPT are often too agreeable, missing opportunities to encourage deeper thinking.
  • It introduces the idea of “productive resistance,” where AI gently challenges users to reflect, especially in educational and high-stakes contexts.
  • The article urges designers to build AI that balances trust and pushback, helping users think critically rather than just feel validated.
6 min read

As UX research shifts and reshapes, how can researchers stay ahead? This article explores the changing landscape and how to thrive in it.

Article by James Lang
Hopeful Futures for UX Research
  • The article explores how UX research is evolving, with roles shifting and adjacent skills like creativity and knowledge management becoming more important.
  • It looks at how non-researchers are doing more research work, and how this trend challenges traditional UX research careers.
  • The piece argues that researchers can stay relevant by adapting, staying curious, and finding new ways to share their value.
16 min read

Mashed potatoes as a lifestyle brand? When AI starts generating user personas for absurd products — and we start taking them seriously — it’s time to ask if we’ve all lost the plot. This sharp, irreverent critique exposes the real risks of using LLMs as synthetic users in UX research.

Article by Saul Wyner
Have SpudGun, Will Travel: How AI’s Agreeableness Risks Undermining UX Thinking
  • The article explores the growing use of AI-generated personas in UX research and why it’s often a shortcut with serious flaws.
  • It introduces critiques that LLMs are trained to mimic structure, not judgment. When researchers use AI as a stand-in for real users, they risk mistaking coherence for credibility and fantasy for data.
  • The piece argues that AI tools in UX should be assistants, not oracles. Trusting “synthetic users” or AI-conjured feedback risks replacing real insights with confident nonsense.
22 min read
