Have SpudGun, Will Travel: How AI’s Agreeableness Risks Undermining UX Thinking
- The article explores the growing use of AI-generated personas in UX research and why it’s often a shortcut with serious flaws.
- It draws on critiques that LLMs are trained to mimic structure, not judgment. When researchers use AI as a stand-in for real users, they risk mistaking coherence for credibility and fantasy for data.
- The piece argues that AI tools in UX should be assistants, not oracles. Trusting “synthetic users” or AI-conjured feedback risks replacing real insights with confident nonsense.
June 17, 2025 · 22 min read