
​6 Ways to Improve Psychological AI Apps and Chatbots

by Marlynn Wei

Repetitiveness, complicated setups, and lack of personalization deter users.

A new study of an AI chatbot and a smartphone app designed to reduce drinking shows that users dislike repetitiveness, a lack of individualized guidance, and complicated setups. The study interviewed users to find out which barriers caused people to abandon each tool.

Apps and chatbots can deliver effective interventions to improve sleep, decrease alcohol use, and reduce anxiety and depression, but the challenge is keeping users on the app. Patterns of user engagement vary widely in terms of frequency, intensity, timing, and accessed features. 

Sustained user engagement is a key factor in the success of psychological apps and chatbots. The number of app installs can be high, but only a small percentage of users use mental health apps consistently over time. One study found that after one week, only 5 to 19% of users continued to use mental health apps. Even when content is helpful, dropout rates are high. 

Features that increase engagement include an appealing visual design, easy navigation, goal setting, reminders, and feedback. Fresh content and a supportive, positive tone keep users coming back.

One way to measure user experience is the Mobile App Rating Scale (MARS), which examines the dimensions of engagement, functionality, aesthetics, and information quality. Another method is conducting user experience interviews or analyzing consumer reviews.
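As a rough sketch of how a MARS-style assessment is typically aggregated (the item ratings below are invented for illustration), each dimension's items are rated on a 1-to-5 scale, each subscale score is the mean of its items, and an overall quality score is the mean of the subscale scores:

```python
from statistics import mean

# Hypothetical 1-5 ratings for each MARS dimension (item values are illustrative).
ratings = {
    "engagement": [4, 3, 5, 4, 4],
    "functionality": [5, 4, 4, 4],
    "aesthetics": [3, 4, 4],
    "information": [4, 4, 3, 5],
}

# Each subscale score is the mean of that dimension's item ratings.
subscale_scores = {dim: mean(items) for dim, items in ratings.items()}

# The overall quality score is the mean of the four subscale scores.
overall = mean(subscale_scores.values())
```

Reporting subscale scores separately, rather than only the overall number, helps pinpoint whether an app is losing users on engagement, functionality, aesthetics, or information quality.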

Researchers in a recent study conducted semi-structured user interviews and found that the top reasons users stopped engaging were technology glitches, notification issues, repetitive material, and a long or glitchy setup. With the AI chatbot, users were frustrated by repetitive conversation, a lack of control over navigation, and the delivery platform.

Here are six features to enhance user engagement of psychological AI apps and chatbots:

1. Make setup easy. A complicated, glitchy setup deters users. One participant in the study described how their data disappeared after they were required to re-register. Informed consent is ethically necessary for apps and chatbots handling personal mental health data, but a streamlined setup is equally important.

2. Offer tracking. Tracking is an important way to get people to interact with the app or chatbot regularly. More importantly, tracking raises awareness and can change behavior. In mindfulness practice, this is called developing an “observer mind,” a powerful stress management skill and catalyst for change. For example, tracking the number of alcoholic drinks one has each day helps people recognize automatic habits.
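A drink-tracking feature like the one described above can be reduced to a very small data model. This is a generic sketch, not the study app's implementation (the class and method names are hypothetical):

```python
from collections import defaultdict
from datetime import date


class DrinkTracker:
    """Minimal daily drink log: one running count per calendar day."""

    def __init__(self):
        self.log = defaultdict(int)  # maps date -> number of drinks recorded

    def record(self, day: date, drinks: int = 1):
        """Add drinks to the tally for the given day."""
        self.log[day] += drinks

    def total_for(self, days):
        """Sum the drinks recorded across an iterable of dates."""
        return sum(self.log[d] for d in days)
```

Even this simple log supports the “observer mind” effect: surfacing a weekly total back to the user makes an automatic habit visible.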

3. Provide personalized feedback and accurate insights. Individualized guidance based on one’s own data gives people feedback and insight into their patterns. Tracking anxiety levels and their timing can help predict anxiety episodes and narrow down potential triggers. Accuracy is critical: one participant reported that the app said they had met their daily goal when they had not, and this kind of inaccuracy erodes user confidence in the app.
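One simple form of the timing-based insight described above is ranking hours of the day by average reported anxiety, to surface likely trigger times. This is an illustrative sketch (the function name and data shape are assumptions, not the study's method):

```python
from collections import defaultdict
from statistics import mean


def peak_anxiety_hours(samples, top_n=3):
    """Given (hour_of_day, anxiety_level) pairs, return the hours with the
    highest average anxiety -- a crude way to flag likely trigger times."""
    by_hour = defaultdict(list)
    for hour, level in samples:
        by_hour[hour].append(level)
    ranked = sorted(by_hour, key=lambda h: mean(by_hour[h]), reverse=True)
    return ranked[:top_n]
```

Deriving feedback directly from the logged data, rather than from a separately maintained status flag, also guards against the accuracy failure the participant described: the insight can never contradict the underlying log.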

4. Make interactions less repetitive. Overly scripted, repetitive bots are not welcome. As in therapy, the therapeutic alliance between the user and the conversational agent determines whether people return. Novelty and a positive tone make the interaction feel therapeutic.
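For template-driven bots, one cheap anti-repetition heuristic is to remember the most recently used phrasings and exclude them from the next pick. This is a generic sketch under that assumption, not how the study's chatbot works:

```python
import random
from collections import deque


class VariedResponder:
    """Pick a response template while avoiding the most recently used ones."""

    def __init__(self, templates, memory=2):
        self.templates = list(templates)
        self.recent = deque(maxlen=memory)  # sliding window of recent picks

    def reply(self):
        # Exclude recently used templates; fall back to all if none remain.
        choices = [t for t in self.templates if t not in self.recent] or self.templates
        pick = random.choice(choices)
        self.recent.append(pick)
        return pick
```

Rotating surface phrasing is no substitute for genuinely dynamic conversation, but it removes the most obvious tell of a scripted bot: hearing the same sentence twice in a row.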

5. Ensure notifications are customizable, accurate, and timely. Faulty or absent notifications can deter users. If the app is based on changing daily habits, the timing of daily reminders is essential.
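The timing logic for a daily habit reminder can be stated precisely: fire at the user's chosen time today, unless that time has passed or today's task is already logged, in which case fire tomorrow. A minimal sketch (function name and parameters are hypothetical):

```python
from datetime import datetime, time, timedelta


def next_reminder(now: datetime, preferred: time, done_today: bool) -> datetime:
    """Return the next reminder datetime: today at the user's preferred time,
    or tomorrow if that time has passed or today's task is already done."""
    candidate = datetime.combine(now.date(), preferred)
    if done_today or candidate <= now:
        candidate += timedelta(days=1)
    return candidate
```

Recomputing the next reminder from the user's preference each time, instead of scheduling a fixed series up front, keeps notifications both customizable and accurate when the user changes the time.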

6. Prioritize user agency and avoid bottlenecking navigation with an unwelcome bot. Users should be able to navigate to resources on their own rather than be forced to interact with a bot. Users described being frustrated at having to go through a bot to reach basic features. One participant in the study described how it felt “strange” to have the bot constantly interrupting them while they were working on a task, much like Microsoft’s Clippy, which famously frustrated users.

These features will make psychological AI apps and chatbots more effective. Integrating personalized feedback, high-quality dynamic conversations, and a smooth, glitch-free setup will improve both user engagement and enjoyment.

Marlynn Wei

Marlynn Wei, MD, JD is a Harvard and Yale-trained psychiatrist, writer, interdisciplinary artist, and author of the Harvard Medical School Guide to Yoga. Dr. Wei is an expert contributor to Psychology Today and Harvard Health and has published in The Journal of Health Law, Harvard Human Rights Journal, and many other academic journals. Her research focuses on innovation and emerging technology, including empathic design, human-AI collaboration, AI in mental health and neurotechnology, and related legal and ethical issues. She is the creator of Elixir: Digital Immortality and other immersive and interactive performances. She is a graduate of Yale Law School, Yale School of Medicine, and Harvard Medical School's MGH/McLean psychiatry residency. Twitter: @marlynnweimd Website: www.marlynnweimd.com

Ideas In Brief
  • Personalized feedback, high-quality dynamic conversations, and a streamlined setup improve user engagement.
  • People dislike an overly scripted and repetitive AI chatbot that bottlenecks access to other features.
  • Tracking is a feature that engages users and develops an “observer mind,” enhancing awareness and change.
  • New research shows that users are less engaged in AI apps and chatbots that are repetitive, lack personalized advice, and have long or glitchy setup processes.
