
6 Ways to Improve Psychological AI Apps and Chatbots

by Marlynn Wei
3 min read

Repetitiveness, complicated setups, and lack of personalization deter users.

A new study of an AI chatbot and a smartphone app designed to reduce drinking shows that users dislike repetitiveness, a lack of individualized guidance, and complicated setups. The researchers interviewed users to find out which barriers caused people to abandon the app or the chatbot.

Apps and chatbots can deliver effective interventions to improve sleep, decrease alcohol use, and reduce anxiety and depression, but the challenge is keeping users engaged. Patterns of user engagement vary widely in frequency, intensity, timing, and which features are accessed.

Sustained user engagement is a key factor in the success of psychological apps and chatbots. The number of app installs can be high, but only a small percentage of users use mental health apps consistently over time. One study found that after one week, only 5 to 19% of users continued to use mental health apps. Even when content is helpful, dropout rates are high. 

Features that increase engagement include appealing visual design, easy navigation, goal setting, reminders, and feedback. Fresh content and a supportive, positive tone keep users coming back.

One way to measure user experience is the Mobile App Rating Scale (MARS), which examines the dimensions of engagement, functionality, aesthetics, and information quality. Another is conducting user experience interviews or analyzing consumer reviews.
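As a rough illustration, here is a minimal sketch of how MARS-style ratings roll up into subscale and overall scores. The item groupings and values below are hypothetical; the real instrument defines a fixed set of items per dimension.

```python
from statistics import mean

# Hypothetical 1-5 item ratings grouped by MARS dimension;
# the real scale specifies which items belong to each subscale.
ratings = {
    "engagement":    [4, 3, 4, 5, 3],
    "functionality": [5, 4, 4, 4],
    "aesthetics":    [3, 4, 3],
    "information":   [4, 4, 3, 4, 4, 3, 2],
}

subscale_means = {dim: mean(items) for dim, items in ratings.items()}
overall = mean(subscale_means.values())  # app quality as the mean of subscale means
for dim, score in subscale_means.items():
    print(f"{dim}: {score:.2f}")
print(f"overall: {overall:.2f}")
```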

Researchers in a recent study conducted semi-structured user interviews and found that the top reasons users stopped engaging were technology glitches, notification issues, repetitive material, and a long or glitchy setup. With the AI chatbot, users were frustrated by repetitive conversation, lack of control over navigation, and the delivery platform.

Here are six features that enhance user engagement with psychological AI apps and chatbots:

1. Make setup easy. A complicated, glitchy setup deters users. One participant in the study described how their data disappeared after they were required to re-register. Informed consent is ethically necessary for apps and chatbots handling personal mental health data, but a streamlined setup is equally important.
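To illustrate the data-loss complaint, here is a minimal sketch (the table and function names are hypothetical, not the study app's implementation) of an idempotent registration step that reuses an existing account instead of wiping it:

```python
import sqlite3

def register(conn: sqlite3.Connection, email: str) -> int:
    """Return the existing account if one matches, so a forced
    re-registration never discards previously tracked data."""
    row = conn.execute("SELECT id FROM users WHERE email = ?", (email,)).fetchone()
    if row:
        return row[0]  # returning user: keep their history
    cur = conn.execute("INSERT INTO users (email) VALUES (?)", (email,))
    conn.commit()
    return cur.lastrowid

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, email TEXT UNIQUE)")
# Registering twice yields the same account rather than a fresh, empty one.
assert register(conn, "user@example.com") == register(conn, "user@example.com")
```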

2. Offer tracking. Tracking gets people to interact with the app or chatbot regularly. More importantly, it raises awareness and can change behavior. Mindfulness practice calls this developing an “observer mind,” a powerful stress-management skill and catalyst for change. For example, tracking the number of alcoholic drinks one has each day helps people recognize automatic habits.
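A minimal sketch of the kind of daily drink log this relies on (the names are illustrative, not taken from the study's app):

```python
from collections import defaultdict
from datetime import date, timedelta

drinks: dict[date, int] = defaultdict(int)  # drinks logged per day

def log_drink(day: date | None = None, count: int = 1) -> None:
    drinks[day or date.today()] += count

def last_week_total(today: date | None = None) -> int:
    today = today or date.today()
    return sum(drinks[today - timedelta(days=i)] for i in range(7))

log_drink()               # one drink today
print(last_week_total())  # the weekly tally makes the habit visible
```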

3. Provide personalized feedback and accurate insights. Individualized guidance based on one’s own data gives people feedback and insight into their patterns. Tracking anxiety levels and their timing can help predict anxiety episodes and narrow down potential triggers. Accuracy is critical: one participant reported that the app told them they had met their daily goal when they had not, which undermined their confidence in the app.
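For example, a minimal sketch of turning timestamped anxiety ratings into one personalized insight (the data and wording are made up for illustration):

```python
from collections import defaultdict
from statistics import mean

# (hour of day, self-rated anxiety 1-10) entries logged by one user
entries = [(9, 6), (9, 7), (14, 3), (14, 4), (21, 8), (21, 9)]

by_hour: dict[int, list[int]] = defaultdict(list)
for hour, rating in entries:
    by_hour[hour].append(rating)

# Surface the hour with the highest average rating as a candidate trigger window.
peak = max(by_hour, key=lambda h: mean(by_hour[h]))
print(f"Your anxiety tends to peak around {peak}:00; worth exploring what happens then.")
```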

4. Make interactions less repetitive. Overly scripted, repetitive bots drive users away. As in therapy, the alliance between the user and the conversational agent determines whether people return. Novelty and a positive tone are what make the interaction feel therapeutic.
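One common remedy is a variant pool that excludes recently used replies. A minimal sketch, with placeholder phrasings:

```python
import random
from collections import deque

CHECK_IN_VARIANTS = [
    "Nice work checking in today.",
    "Thanks for logging; consistency pays off.",
    "Good to see you back. How did today go?",
    "Another entry in the books. What stood out today?",
]

recent: deque[str] = deque(maxlen=2)  # remember the last two replies

def pick_reply() -> str:
    # Prefer variants the user has not just heard; fall back if all are recent.
    fresh = [v for v in CHECK_IN_VARIANTS if v not in recent]
    reply = random.choice(fresh or CHECK_IN_VARIANTS)
    recent.append(reply)
    return reply

print(pick_reply())
```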

5. Ensure notifications are customizable, accurate, and timely. Faulty or absent notifications can deter users. If the app is based on changing daily habits, the timing of daily reminders is essential.
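A minimal sketch of a reminder check driven by a user-chosen time rather than a hard-coded default (scheduling and delivery details are omitted):

```python
from datetime import datetime, time

reminder_at = time(20, 30)  # the user picks this, not the app

def reminder_due(now: datetime, sent_today: bool) -> bool:
    """Fire at most once per day, at the user's chosen time."""
    return not sent_today and now.time() >= reminder_at

print(reminder_due(datetime(2024, 5, 1, 20, 31), sent_today=False))  # True
```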

6. Prioritize user agency and avoid bottlenecking navigation with an unwelcome bot. Users should be able to navigate to resources on their own rather than be forced to interact with a bot. Participants described the frustration of having to go through the bot to reach basic features, and one said it felt “strange” to have the bot constantly bothering them while they were working on a task. The effect recalls Microsoft’s Clippy, the assistant that frustrated users by interjecting uninvited.
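In routing terms, every feature should have a direct path, with the bot as one optional entry point. A minimal sketch, with hypothetical routes:

```python
# Every feature is reachable directly; the bot is opt-in, not a gate.
ROUTES = {
    "track":    "/track",
    "progress": "/progress",
    "settings": "/settings",
    "chat":     "/chat",  # the conversational agent is just one route
}

def navigate(choice: str) -> str:
    return ROUTES.get(choice, "/home")  # unknown input falls back to home, not the bot

print(navigate("progress"))  # /progress, no bot required
```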

These six features will make psychological AI apps and chatbots more effective. Integrating personalized feedback, high-quality dynamic conversations, and a smooth, glitch-free setup will improve both user engagement and enjoyment.

Marlynn Wei

Marlynn Wei, MD, JD is a Harvard and Yale-trained psychiatrist, writer, interdisciplinary artist, and author of the Harvard Medical School Guide to Yoga. Dr. Wei is an expert contributor to Psychology Today and Harvard Health and has published in The Journal of Health Law, Harvard Human Rights Journal, and many other academic journals. Her research focuses on innovation and emerging technology, including empathic design, human-AI collaboration, AI in mental health and neurotechnology, and related legal and ethical issues. She is the creator of Elixir: Digital Immortality and other immersive and interactive performances. She is a graduate of Yale Law School, Yale School of Medicine, and Harvard Medical School's MGH/McLean psychiatry residency. Twitter: @marlynnweimd Website: www.marlynnweimd.com

Ideas In Brief
  • Personalized feedback, high-quality dynamic conversations, and a streamlined setup improve user engagement.
  • People dislike an overly scripted and repetitive AI chatbot that bottlenecks access to other features.
  • Tracking is a feature that engages users and develops an “observer mind,” enhancing awareness and change.
  • New research shows that users are less engaged in AI apps and chatbots that are repetitive, lack personalized advice, and have long or glitchy setup processes.

