
Building a Journaling Bot with Academics Daniel Lametti and Joanna Kuc

by Josh Tyson
1 min read

Daniel Lametti and Joanna Kuc join Robb and Josh to share the advanced chatbot they created for a mental health research project. Daniel is an Associate Professor of Cognitive Psychology at Acadia University and was instrumental in launching OneReach.ai's Academic Research Fellowship program. Joanna is a PhD candidate in Experimental Psychology at University College London whose work focuses on decoding language biomarkers related to mental health. Using the OneReach.ai platform together with Telegram, they built a bot that collected journal entries from participants as either written text or voice notes. In addition to reminding people to make their daily entries, the bot also provided some participant groups with weekly summaries of their entries.

Discover the role conversational AI can play in mental health, how the experience of journaling can be enhanced by large language models, and how a flexible approach to building conversational experiences can reach users where they are. Learn more about the design of this experience and some of their findings in this engaging and practical episode of Invisible Machines. We recommend watching this episode on our YouTube channel to see the demos in action.

Check out the episode here.

Josh Tyson
Josh Tyson is the co-author of Age of Invisible Machines, the first bestselling book about conversational AI. He is also the Director of Creative Content at OneReach.ai and co-host of both the Invisible Machines and N9K podcasts. His writing has appeared in numerous publications over the years, including Chicago Reader, Fast Company, FLAUNT, The New York Times, Observer, SLAP, Stop Smiling, Thrasher, and Westword.


Related Articles

AI is changing the way we design — turning ideas into working prototypes in minutes and blurring the line between designer and developer. What happens when anyone can build?

Article by Jacquelyn Halpern
The Future of Product Design in an AI-Driven World
  • The article shows how AI tools let designers build working prototypes quickly just by using natural language.
  • It explains how AI helps designers take on more technical roles, even without strong coding skills.
  • The piece imagines a future where anyone with an idea can create and test products easily, speeding up innovation for everyone.
4 min read

Discover how agentic AI is reshaping enterprise operations — unlocking smarter decisions, personalized automation, and a path toward self-driving organizations.

Article by Josh Tyson
Agentic AI: Fostering Autonomous Decision Making in the Enterprise
  • The article explains how agentic AI enables enterprises to automate complex decisions, transforming business processes and improving efficiency.
  • It introduces Intelligent Digital Workers (IDWs), systems of AI agents that evolve from processing data to making personalized, context-aware decisions.
  • It emphasizes that successful agentic AI requires open, flexible ecosystems and human collaboration to guide and orchestrate AI agents effectively.
6 min read

Why does Google’s Gemini promise to improve, but never truly change? This article uncovers the hidden design flaw behind AI’s hollow reassurances and the risks it poses to trust, time, and ethics.

Article by Bernard Fitzgerald
Why Gemini’s Reassurances Fail Users
  • The article reveals how Google’s Gemini models give false reassurances of self-correction without real improvement.
  • It shows that this flaw is systemic, designed to prioritize sounding helpful over factual accuracy.
  • The piece warns that such misleading behavior risks user trust, wastes time, and raises serious ethical concerns.
6 min read
