The Power of Designing for Pushback

by Charles Gedeon
6 min read

AI systems like ChatGPT are built to be helpful and agreeable, but is constant affirmation really what we need? This article explores the power of “productive resistance” and argues that well-designed pushback can lead to deeper thinking, better learning, and more meaningful interactions. It’s a call for designers to move beyond validation and create AI that helps us grow.

ChatGPT is accommodating, arguably to a fault, and if the people building and designing these systems are not careful, users may find themselves on the precipice of losing some of their critical thinking faculties.

The average interaction goes like this: you throw in a half-formed question or poorly phrased idea, and the machine responds with passionate positivity: “Absolutely! Let’s explore…”. It doesn’t correct you, doesn’t push back, and rarely makes you feel uncomfortable. In fact, the chatbot seems eager to please, no matter how ill-informed your input might be. This accommodating behavior led me to consider what an alternative could look like. Namely, how could ChatGPT challenge us rather than simply serve us?

Recently, while I was sharing a ChatGPT conversation on Slack, the link’s embedded preview caught my attention. OpenAI had described ChatGPT as a system that “listens, learns, and challenges.” The word “challenges” stood out.

Image by Charles Gedeon

It wasn’t a word I naturally associated with ChatGPT. It’s a word that carries weight, something that implies confrontation, or at the very least, a form of constructive pushback. So, I found myself wondering: what does it mean for an AI to “challenge” us? And perhaps more importantly, do users naturally want a challenger?

The role of challenge in building effective platforms

As designers build new platforms and tools that integrate AI systems, particularly in domains like education and knowledge-sharing, the concept of “challenge” becomes crucial. As a society, we can choose whether we want these systems to be passive responders or active guides, capable of correcting and sometimes even challenging human thinking.

Designers’ expertise lies in understanding not just the technology itself but also the critical and systems thinking required to design tools that actively benefit their users. I believe that AI should sometimes be capable of challenge, especially when that challenge encourages deeper thinking and better outcomes for users. Designing such features isn’t just about the tech; it’s about understanding the right moments to challenge versus comply.

What should a challenge look like from an AI?

The idea of being challenged by an AI prompts us to think about how and when an AI should correct us. Imagine asking ChatGPT for advice, and instead of its usual affirming tone, it says, “You’re approaching this the wrong way.” How would you feel about that? Would you accept its guidance like you might from a mentor, or would you brush it off as unwanted interference? After all, this is not a trusted friend — it’s a machine, an algorithm running in a data center far away. It’s designed to generate answers, not nurture relationships or earn trust.

Which of these options seems best for you? Image source: Pragmatics Studio

Consider the image above. Each response is a valid option in some context, but seen next to the exact same prompt over and over, some of them start to rub us the wrong way. Too much pushback, and people get frustrated. In his paper Intention Is All You Need, Advait Sarkar introduces the notion of productive resistance.

The notion of AI providing productive resistance becomes vital when these systems are used as educational tools or decision aids. In educational technology, for instance, a well-placed challenge can stimulate deeper learning. A system that challenges misconceptions, asks follow-up questions, or prompts users to reflect critically could become a powerful ally in learning environments. This is especially relevant if our goal is to create platforms where designers want users not just to find answers but to learn how to think.
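To make this less abstract, here is a minimal sketch of what productive resistance could look like when wired into a chatbot, using the OpenAI Python client. The system prompt wording, model choice, and example question are my own illustrations, not a prescribed recipe.

```python
# A minimal sketch: encoding "productive resistance" as a system prompt.
# Assumes the official OpenAI Python client (openai>=1.0); the prompt
# wording, model name, and example question are illustrative only.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

PRODUCTIVE_RESISTANCE = """\
You are a tutor, not a cheerleader. When the user states a premise:
1. If it contains a misconception, name it plainly and explain why.
2. Ask one follow-up question that makes the user examine their assumption.
3. Only then offer an answer, framed as something to evaluate, not accept.
Never open with unconditional agreement such as "Absolutely!"."""


def challenge(user_message: str) -> str:
    """Return a response that pushes back before it helps."""
    response = client.chat.completions.create(
        model="gpt-4o",
        messages=[
            {"role": "system", "content": PRODUCTIVE_RESISTANCE},
            {"role": "user", "content": user_message},
        ],
    )
    return response.choices[0].message.content


print(challenge("Humans only use 10% of their brains, right?"))
```

The point of the sketch is not the prompt itself but where the resistance lives: as an explicit design decision, rather than an accident of the model’s default tone.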

One surprising area where LLMs have an impact is misinformation correction. Through productive resistance, AI chatbots have been shown to reduce belief in conspiracy theories by presenting accurate information and effectively challenging users’ misconceptions. In a recent study highlighted by MIT Technology Review, participants who engaged in conversations with AI chatbots reported a significant reduction in their belief in conspiracy theories. By providing accurate, well-sourced information, AI can be more effective than human interlocutors at overcoming deeply held, yet false, beliefs. While this demonstrates the critical role AI can play in combating misinformation, particularly when users are willing to engage in dialogue with an open mind, should it replace human-to-human dialogue on these issues?

The balance of compliance and pushback

The misinformation study describes a particular context: users explicitly engaging with an AI to learn or to change their worldview. There is an intention there, a curiosity that opens the door to being challenged. Contrast this with a different context: a user casually looking up information related to a debunked topic, not even realizing it has been debunked. How should an AI behave here? Should it challenge the user by interrupting the flow, pointing out inaccuracies, or slowing them down with prompts to think critically? Or should it comply with the query, giving them what they think they want?

This balance between compliance and pushback is at the core of what designers need to consider when building platforms that rely on AI. Machines like ChatGPT often generate confident summaries that sound credible, even when the underlying content is flawed or incomplete. The more these systems integrate into our lives, the more critical it becomes for them to question, to challenge, and to help us think deeply, even when we aren’t explicitly asking them to. This is especially true when the stakes are high, when misinformation could lead to harm, or when oversimplified answers could lead to poor decisions.
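One way to reason about this trade-off is to treat it as an explicit routing decision rather than a fixed personality. The sketch below is a hypothetical heuristic, not a production policy: the stakes and confidence signals are assumed to come from upstream classifiers, and the thresholds are invented for illustration.

```python
# A hypothetical compliance-versus-pushback policy. The signals
# (stakes, premise_confidence, invited_debate) and the thresholds
# are invented for illustration; a real system would derive them
# from upstream classifiers and tune them empirically.
from dataclasses import dataclass
from enum import Enum


class Mode(Enum):
    COMPLY = "answer directly"
    HEDGE = "answer, but flag uncertainty"
    CHALLENGE = "push back before answering"


@dataclass
class QueryContext:
    stakes: float              # 0.0 (trivia) .. 1.0 (health, money, safety)
    premise_confidence: float  # how sound the user's premise appears
    invited_debate: bool       # e.g. "convince me", "check my reasoning"


def choose_mode(ctx: QueryContext) -> Mode:
    """The higher the stakes and the shakier the premise,
    the more the system should resist rather than comply."""
    if ctx.invited_debate:
        return Mode.CHALLENGE   # intention opens the door to pushback
    if ctx.stakes > 0.7 and ctx.premise_confidence < 0.5:
        return Mode.CHALLENGE   # harm-prone misinformation: interrupt
    if ctx.premise_confidence < 0.5:
        return Mode.HEDGE       # weak premise, low stakes: flag, don't block
    return Mode.COMPLY          # sound premise: give them what they asked for


# A user casually repeating a debunked health claim:
print(choose_mode(QueryContext(stakes=0.9, premise_confidence=0.3,
                               invited_debate=False)))
# -> Mode.CHALLENGE
```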

Designing for trust and critical engagement

Designers will inevitably become the builders of AI-driven platforms, so it’s imperative that we keep this delicate balance in mind. Systems should build trust while also encouraging critical engagement. A chatbot embedded in an educational platform, for example, must be more than a cheerleader; it should be a coach that knows when to encourage and when to question. This requires careful design and a deep understanding of the context in which the AI operates.

At the core of this exploration is an uncomfortable reality about people’s willingness to act with intention. With previous interfaces, designers could shape users’ intentions through buttons, forms, and other such tools. Yet, as a society, we’ve seen how a lack of intention in the way people researched with Google and clicked through social media led to unfavourable outcomes for social cohesion and personal sense-making.

Shape of AI has many interface ideas but not many philosophies, which might be the fabric of nondeterministic UX design. Image by Charles Gedeon

The opportunity for designers is to use generative interfaces as a new way of enabling deeper intention, even when users themselves are unaware they need it. If you’re a designer, you are being handed challenging new territory, and you have the opportunity to step up with more than just fancy new micro-interactions. You can now shape the guiding philosophies of our software interactions, which function more like guidelines than fixed systems. This means, more than ever before, you are responsible for making sure those guidelines don’t fall victim to the past era of design. Instead of getting users hooked on easy, addictive interfaces in the name of more clicks, imagine the long-term benefits of interfaces that provoke deeper thought.

The article originally appeared on Pragmatics Studio.

Featured image courtesy: Pragmatics Studio.

Charles Gedeon
Charlie Gedeon is a strategic designer and co-founder of Pragmatics Studio, where he transforms ideas into meaningful, user-centered digital experiences. With deep expertise in aligning design, technological foresight, and business goals, he helps companies create impactful products that resonate with purpose and humanity. Charlie is also an instructor of the UX Certificate at Concordia University and shares his thoughts on the Pragmatics Studio blog and YouTube channel.

Ideas In Brief
  • The article argues that AI systems like ChatGPT are often too agreeable, missing opportunities to encourage deeper thinking.
  • It introduces the idea of “productive resistance,” where AI gently challenges users to reflect, especially in educational and high-stakes contexts.
  • The article urges designers to build AI that balances trust and pushback, helping users think critically rather than just feel validated.
