
Member-only story

Using Conversational AI Platforms to Find the Real ROI

by UX Magazine Staff
5 min read

As conversational AI reshapes business processes, major platforms like OneReach.ai and Cognigy are emerging as pivotal players in scalable implementations. Discover how these platforms orchestrate AI ecosystems, providing long-term value and transforming product design in ways that tools like ChatGPT alone cannot.

With Forrester releasing its new report on conversational AI for customer service, all of the major analyst firms (including IDC and Gartner) have now compiled reports on the top platforms companies are using to create conversational AI experiences for their employees and customers. This comes at a moment when AI agents dominate discussions about how companies can orchestrate business processes using conversational AI.

While everyone has heard of OpenAI and Google, companies like OneReach.ai and Cognigy may not be well-known to those outside this marketplace. Still, they are among the platforms in Forrester’s report, which highlights an often-overlooked layer of the market that is critical to scalable implementations of generative AI.

There is a disconnect between the short-term capabilities of generative tools and the much longer-term strategies organizations need in order to actually leverage the technologies associated with conversational AI. This wave of reports from major analyst groups seems to recognize that disconnect: platforms for orchestrating these technologies are designed to deliver the real business value that organizations are searching for.

Become a member to read the full article.

Become a member
Tweet
Share
Post
Share
Email
Print

Related Articles

What if your AI didn’t just agree, but made you think harder? This piece explores why designing for pushback might be the key to smarter, more meaningful AI interactions.

Article by Charlie Gedeon
The Power of Designing for Pushback
  • The article argues that AI systems like ChatGPT are often too agreeable, missing opportunities to encourage deeper thinking.
  • It introduces the idea of “productive resistance,” where AI gently challenges users to reflect, especially in educational and high-stakes contexts.
  • The article urges designers to build AI that balances trust and pushback, helping users think critically rather than just feel validated.
6 min read

As UX research shifts and reshapes, how can researchers stay ahead? This article explores the changing landscape and how to thrive in it.

Article by James Lang
Hopeful Futures for UX Research
  • The article explores how UX research is evolving, with roles shifting and adjacent skills like creativity and knowledge management becoming more important.
  • It looks at how non-researchers are doing more research work, and how this trend challenges traditional UX research careers.
  • The piece argues that researchers can stay relevant by adapting, staying curious, and finding new ways to share their value.
16 min read

Mashed potatoes as a lifestyle brand? When AI starts generating user personas for absurd products — and we start taking them seriously — it’s time to ask if we’ve all lost the plot. This sharp, irreverent critique exposes the real risks of using LLMs as synthetic users in UX research.

Article by Saul Wyner
Have SpudGun, Will Travel: How AI’s Agreeableness Risks Undermining UX Thinking
  • The article explores the growing use of AI-generated personas in UX research and why it’s often a shortcut with serious flaws.
  • It introduces critiques that LLMs are trained to mimic structure, not judgment. When researchers use AI as a stand-in for real users, they risk mistaking coherence for credibility and fantasy for data.
  • The piece argues that AI tools in UX should be assistants, not oracles. Trusting “synthetic users” or AI-conjured feedback risks replacing real insights with confident nonsense.
22 min read
