

How User Experience Research (UXR) Can Help Build Responsible AI Applications

by Celine Lenoble
3 min read

Learn ways user experience research can help build responsible AI applications.

Today, Artificial Intelligence (AI) and, more specifically, Machine Learning (ML) are pervasive in our daily lives. From Facebook ads to YouTube recommendations, from Siri to Google Assistant, and from the automated translation of device notices to marketing personalization tools, AI now deeply permeates both our work and personal lives.

This article is the second in a series of three (catch up on the first installment here and the third here) advocating for renewed UX research efforts in ML apps.

With ML-powered products reaching so many users, there is a strong case for approaching their conception and design from a UX research perspective.

This rests on three main reasons:

1. Users’ mental models haven’t caught up with how ML and AI truly work:

  • UXR can uncover existing mental models and help design new ones better suited to this new technology.

2. ML and AI can have an insidious and deep impact on all users’ lives:

  • UXR reveals the myriad intended and unintended effects of apps on people’s lives, and helps build more ethical AI.

3. ML and AI can have disparate impacts on individuals based on their ethnicity, religion, gender, or sexual orientation:

  • UXR can also help address some of the sources of bias.

In this installment, we will focus on the second reason:

How can we assess the impact of ML apps on users and meet AI ethical standards?

Most ML algorithms are supposed to assist humans in their decision-making process, not make the decision themselves. Increasingly, however, AI systems do not content themselves with making recommendations; they make decisions, in tasks ranging from sifting through resumes to selecting which neighborhoods to patrol on the next police shift.

Given the scale at which AI systems operate, their potential impact on specific individuals, groups, or even society as a whole is deep and wide. While harmful human practices have always existed, they have evolved alongside social and legal guidelines that mitigate them. Not so, yet, for AI-driven systems.
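
One way teams can begin to quantify such disparate impact is the selection-rate comparison behind the “four-fifths rule” used in US employment law. The sketch below is illustrative only: the function names and the toy data are assumptions, not from the article or any specific library.

    from collections import defaultdict

    def selection_rates(decisions, groups):
        # Fraction of positive outcomes (e.g., resume advanced) per group.
        totals, positives = defaultdict(int), defaultdict(int)
        for outcome, group in zip(decisions, groups):
            totals[group] += 1
            positives[group] += outcome
        return {g: positives[g] / totals[g] for g in totals}

    def disparate_impact_ratio(decisions, groups):
        # Lowest group selection rate divided by the highest; values below
        # 0.8 are a common red flag (the "four-fifths rule").
        rates = selection_rates(decisions, groups)
        return min(rates.values()) / max(rates.values())

    # Toy data: a screening model's binary decisions for two groups.
    decisions = [1, 0, 1, 1, 0, 1, 0, 0, 1, 0]
    groups = ["A"] * 5 + ["B"] * 5
    print(disparate_impact_ratio(decisions, groups))  # 0.4 / 0.6 ≈ 0.67: flagged

A ratio this far below 0.8 would not prove the system is unfair, but it is exactly the kind of signal that should trigger the qualitative UXR investigation this series advocates.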

Ethics in AI

Research on ethical AI has been conducted by academia, industry, and governments, and it has produced a series of guidelines.

Microsoft lists six principles of responsible AI: fairness, reliability and safety, privacy and security, inclusiveness, transparency, and accountability.

And the European Union’s High-Level Expert Group on Artificial Intelligence published Ethics Guidelines for Trustworthy AI in 2019, which state:

Develop, deploy and use AI systems in a way that adheres to the ethical principles of: respect for human autonomy, prevention of harm, fairness and explicability. Acknowledge and address the potential tensions between these principles.

While research on these high-level principles is well underway, there is a gap between policies and guidelines and their implementation. When should data scientists concern themselves with these considerations? How can product managers integrate them into their roadmaps? What practical steps can they take to ensure their app will be responsible and trustworthy?
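
As one possible answer to that last question, a team could turn such guidelines into a release gate that blocks deployment until each principle has a verifiable answer. This is a hypothetical sketch under assumed checklist items; neither the gate nor the questions come from the article or any standard tool.

    # Hypothetical release gate: maps each high-level principle to one
    # concrete question a team must affirm before shipping an ML feature.
    CHECKLIST = {
        "fairness": "Were error rates compared across relevant user groups?",
        "prevention of harm": "Was a failure-mode review held with UXR?",
        "human autonomy": "Can users override or appeal the model's decision?",
        "explicability": "Is there a plain-language explanation of outputs?",
    }

    def release_gate(answers):
        # Returns True only if every checklist item has been affirmed.
        missing = [item for item in CHECKLIST if not answers.get(item)]
        for item in missing:
            print(f"BLOCKED on '{item}': {CHECKLIST[item]}")
        return not missing

    # Example: one unanswered principle is enough to block the release.
    ok = release_gate({
        "fairness": True,
        "prevention of harm": True,
        "human autonomy": False,
        "explicability": True,
    })
    print("ship" if ok else "do not ship")  # prints "do not ship"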

The Partnership on AI, a non-profit that includes industry leaders, universities, and civil society groups, acknowledges that all AI stakeholders need to be actively involved in preventing potential harm resulting from AI research:

Through our work, it has become clear that effectively anticipating and mitigating downstream consequences of AI research requires community-wide effort; it cannot be the responsibility of any one group alone. The AI research ecosystem includes both industry and academia, and comprises researchers, engineers, reviewers, conferences, journals, grantmakers, team leads, product managers, administrators, communicators, institutional leaders, data scientists, social scientists, policymakers, and others.


Celine Lenoble

I am Director of UX Research at brainlabs, where I lead a team of CRO analysts and UX researchers with diverse backgrounds. I believe in a holistic approach to UX research and design that combines all perspectives (human-computer interaction, design thinking, psychology, sociology, anthropology) and all methods, from big data to ethnographic studies. I am particularly interested in the UX of ML-powered products and services. Disclaimer: opinions represented here are personal and do not represent those of brainlabs.

Ideas In Brief
  • UXR reveals the myriad intended and unintended effects of apps on people’s lives and helps build more ethical AI
  • The article covers:
    • How to assess the impact of ML apps on users and meet AI ethical standards
    • Ethics in AI

