How Can User Experience Research (UXR) Help Build Users’ Trust in AI Systems and Increase Engagement?

by Celine Lenoble
4 min read

With ML reaching so many users, there is a strong case for approaching the conception and design of ML-powered applications from a UX research perspective. Read on to find out why.

Today, Artificial Intelligence (AI) and, more specifically, Machine Learning are pervasive in our daily lives. From Facebook ads to YouTube recommendations; from Siri to Google Assistant; and from automated translation of device notices to marketing personalization tools; AI now deeply permeates both our work and personal lives.

This article is the first in a series of three that advocate for renewed UX research efforts in ML apps.

With ML reaching so many users, there is a strong case for approaching the conception and design of ML-powered applications from a UX research perspective.

This case rests on three main reasons:

  1. Users’ mental models haven’t caught up with how ML and AI truly work.
  • UXR can uncover existing mental models and help design new ones better suited to this new technology.

  2. ML and AI can have an insidious and deep impact on all users’ lives.
  • UXR reveals the myriad intended and unintended effects of apps on people’s lives, and helps build more ethical AI.

  3. ML and AI can have disparate impacts on individuals based on their ethnicity, religion, gender, or sexual orientation.
  • UXR can help address some of the sources of this bias.

In this first article, we will focus on the first reason: How can UXR help build trust in AI systems and increase users’ engagement?

ML and Real Users

Users’ attitudes towards ML-powered apps are complex. Algorithm aversion has been well studied and documented:

In a wide variety of forecasting domains, experts and laypeople remain resistant to using algorithms, often opting to use forecasts made by an inferior human rather than forecasts made by a superior algorithm. Indeed, research shows that people often prefer humans’ forecasts to algorithms’ forecasts (Diab, Pui, Yankelevich, & Highhouse, 2011; Eastwood, Snook, & Luther, 2012), more strongly weigh human input than algorithmic input (Önkal, Goodwin, Thomson, Gönül, & Pollock, 2009; Promberger & Baron, 2006), and more harshly judge professionals who seek out advice from an algorithm rather than from a human (Shaffer, Probst, Merkle, Arkes, & Medow, 2013).

Dietvorst, B. J., Simmons, J. P., & Massey, C. (2015). Algorithm aversion: People erroneously avoid algorithms after seeing them err. Journal of Experimental Psychology: General, 144(1), 114–126. 

However, their research shows that this algorithm aversion appears only once humans witness, or are made aware of, forecasting errors. In 2019, Logg, Minson, and Moore demonstrated the contrary: before seeing any errors, humans show an initial appreciation of algorithmic advice over that of fellow humans:

Our participants relied more on identical advice when they thought it came from an algorithm than from other people. They displayed this algorithm appreciation when making visual estimates and when predicting: geopolitical and business events, the popularity of songs, and romantic attraction. Additionally, they chose algorithmic judgment over human judgment when given the choice. They even showed a willingness to choose algorithmic advice over their own judgment.

Logg, J. M., Minson, J. A., & Moore, D. A. (2019). Algorithm appreciation: People prefer algorithmic to human judgment. Organizational Behavior and Human Decision Processes, 151, 90–103.

ML and The Theory of Machine

One possible explanation still being investigated is the “Theory of Machine” (the equivalent of the “Theory of Mind,” but for machines) that people operate with. The Theory of Machine, or, more simply, the mental model, as designers call it, is the set of assumptions humans make about how an application works internally.

One such assumption is the idea of a fixed mindset. In psychology, having a fixed mindset means believing that people have a certain amount of intelligence or skill and can do nothing to increase that amount. Applied to a Theory of Machine, it means that people believe a computer program’s output is fully determined by its initial input and that the program is incapable of learning or evolving.

For a long time, a fixed mindset toward traditional software was appropriate. A typical program, such as a word processor or spreadsheet, was not capable of improving on its own or learning from its mistakes. Users might expect changes following an update, but otherwise they expect the program to behave consistently over time.

When confronted with ML-powered applications, users continue to apply this classic fixed-mindset mental model. So, once they experience what they perceive as the app making a mistake, they completely lose trust in the system’s ability to give accurate results. This is possibly what triggers the shift from initial appreciation to algorithm aversion.
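To make the mismatch concrete, here is a minimal, purely illustrative Python sketch (all names are hypothetical): a traditional function maps the same input to the same output forever, while a learning system’s answer to the very same query drifts as it accumulates feedback, which is exactly the behavior a fixed-mindset mental model does not anticipate.

    # Purely illustrative: the two behaviors behind the two mental models.

    def spell_check(word):
        """Traditional software: same input, same output, every time."""
        dictionary = {"cat", "dog", "tree"}
        return word in dictionary

    class OnlineRecommender:
        """Toy learning system: its answer to the same query changes
        as it accumulates user feedback."""

        def __init__(self):
            self.scores = {}  # item -> running score

        def feedback(self, item, liked):
            # +1 for a like, -1 for a dislike.
            self.scores[item] = self.scores.get(item, 0) + (1 if liked else -1)

        def recommend(self):
            # The "best" item depends on everything seen so far.
            return max(self.scores, key=self.scores.get) if self.scores else None

    rec = OnlineRecommender()
    rec.feedback("jazz", liked=True)
    print(rec.recommend())   # "jazz"
    rec.feedback("rock", liked=True)
    rec.feedback("rock", liked=True)
    print(rec.recommend())   # "rock": same query, different answer

A user reasoning with the spell_check mental model will read the recommender’s shift as inconsistency, or a mistake, rather than as learning.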

Numerous ML apps present themselves as assistants. They draw on the mental model of a relationship with a person, hoping to change the assumptions users make about how the program works.

This choice of mental model presents several challenges:

  • AI is not (yet) powerful enough to pass for a human: users’ expectations are shaped by how they expect a human to respond, and they typically end up extremely disappointed, if not infuriated, by the AI’s behavior.
  • Even with their fellow humans, people tend to apply a fixed mindset and rarely allow for the possibility of growth and change in capabilities, at least not in any short time frame.
  • If users do hold a growth mindset toward humans, believing that people can improve when given opportunities to learn or when taught what to do, this mindset doesn’t transfer well to AI assistants, because the learning modalities of humans and AI are so different.

Mental Models and User Engagement with ML Apps

Which mental model should you use, then? There is no one-size-fits-all answer to this question. This is where User Experience Research is required:

  • to uncover the existing mental models associated with specific tasks,
  • to experiment with multiple UI metaphors beyond the assistant (a sketch of one such comparison follows this list), and
  • to help users adjust their existing mental models and expectations to the reality of ML-powered apps.
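To illustrate the second point, here is a minimal, hypothetical Python sketch of how a researcher might compare post-task trust ratings across candidate UI metaphors. The metaphor names and the 1–7 Likert ratings below are invented placeholders, not data from any real study.

    # Hypothetical sketch: comparing post-task trust ratings across UI metaphors.
    # All values are invented placeholders for illustration only.
    from statistics import mean, stdev

    ratings = {
        "assistant":  [3, 4, 2, 5, 3, 4, 3],
        "smart tool": [5, 4, 6, 5, 4, 5, 6],
        "apprentice": [6, 5, 6, 7, 5, 6, 5],
    }

    for metaphor, scores in ratings.items():
        print(f"{metaphor:>10}: n={len(scores)}  mean={mean(scores):.2f}  sd={stdev(scores):.2f}")

In practice, such a quantitative comparison would be paired with qualitative interviews to understand why a given metaphor sets better or worse expectations.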

Celine Lenoble

I am Director of UX Research at brainlabs where I lead a team of CRO analysts and UX researchers with diverse backgrounds. I believe in a holistic approach to UX research and design combining all perspectives: Human Computer Interaction, design thinking, psychology, sociology, anthropology and all methods: from big data to ethnographic study. I am particularly interested in the UX of ML-powered products & services. Disclaimer: opinions represented here are personal and do not represent those of brainlabs.

Ideas In Brief
  • The article covers the conception and design of ML-powered applications from a UX research perspective.
  • The author unpacks the following ideas:
    • Machine Learning and Real Users
    • Machine Learning and The Theory of Machine
    • Mental Models and User Engagement with Machine Learning Apps
