
3 techniques to influence user behavior

by Andrew Coyle
2 min read

This article covers 3 conditioning techniques designers use to influence behavior. These methods are widespread and employed in almost every successful app. Use with caution.

1. Classical conditioning

Classical conditioning is a subconscious association technique that pairs a neutral stimulus with a desirable stimulus to create an associated trigger. After repeated pairings over time, the neutral stimulus alone can elicit the positive response, even when the desirable stimulus is absent.

Example of classical conditioning:

A user’s mobile device vibrates each time they order takeout from a food delivery app, eventually leading to an increase in orders.

  1. A user feels their mobile device vibrate after ordering.
  2. The user receives and consumes delicious food.
  3. The user makes many orders over the following weeks, each time feeling a vibration at checkout. Eventually, the user associates the vibration with the positive feelings related to the anticipation of eating.
  4. From then on, the user subconsciously thinks of ordering food whenever their mobile device vibrates. Orders increase.
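The pairing sequence above can be sketched as a toy associative-strength model, in the spirit of a Rescorla-Wagner-style update. The function, learning rate, and number of pairings are illustrative assumptions, not anything from the article:

```python
def update_association(strength, reward_present, alpha=0.3, max_strength=1.0):
    """One learning step: nudge the vibration->food association toward its
    target (max_strength if the reward follows, 0 if it does not)."""
    target = max_strength if reward_present else 0.0
    return strength + alpha * (target - strength)

strength = 0.0  # no association before the first order
for order in range(10):  # ten checkout-vibration + food pairings
    strength = update_association(strength, reward_present=True)

# After repeated pairings, the association is strong enough that the
# vibration alone can trigger anticipation of food.
```

The key property the sketch captures is that each pairing closes part of the remaining gap to full association, which is why early pairings matter most and the effect plateaus.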

2. Operant conditioning

Operant conditioning is an associative learning process that guides an individual toward a desired behavior through reinforcement. Positive reinforcement adds a rewarding stimulus after an action, making the action more likely; negative reinforcement removes an aversive stimulus after an action, which also makes the action more likely. (Punishment, by contrast, adds an aversive stimulus or removes a rewarding one to make a behavior less likely.)

Example of operant conditioning:

A social media app wants users to post more content.

  1. The app makes the act of sharing easy and fun.
  2. Other users like the post and add comments.
  3. The likes and comments reinforce the act of posting content.
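The feedback loop above can be sketched as a simple reinforcement update on the user's tendency to post. The function name, learning rate, and week count are hypothetical, chosen only to illustrate the loop:

```python
def reinforce(post_rate, rewarded, lr=0.2):
    """One operant-conditioning step: move the posting tendency toward 1
    when the post is rewarded (likes, comments), toward 0 when it is not."""
    target = 1.0 if rewarded else 0.0
    return post_rate + lr * (target - post_rate)

rate = 0.1  # a new user who rarely posts
for week in range(8):  # each week's post attracts likes and comments
    rate = reinforce(rate, rewarded=True)

# Consistent social rewards steadily raise the posting rate; if the
# rewards stopped, the same update would extinguish the habit.
```

Note the symmetry: the same mechanism that builds the habit under steady reinforcement also explains extinction when the likes and comments dry up.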

3. Shaping

Shaping is the reinforcement of successive approximations toward a target behavior. Employ a shaping strategy when the target behavior is complex or a difficult sell.

Example of shaping:

A VR editing app wants users to buy pre-built components.

  1. The app attracts users to view VR simulations built by others.
  2. The user can download assets from the simulations to build their own.
  3. As the user creates a simulation, they are presented with other components to use, both free and paid.
  4. The user decides to buy their first VR component.
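The four steps above can be sketched as a staged funnel in which each reinforced approximation unlocks the next, closer step toward the target behavior. The stage names and helper function are illustrative assumptions, not product details from the article:

```python
# Shaping: reinforce successive approximations toward the target behavior.
STAGES = [
    "view_simulations",        # easy entry point
    "download_free_assets",    # closer approximation
    "browse_paid_components",  # closer still
    "buy_component",           # target behavior
]

def advance(stage_index, reinforced):
    """Move to the next approximation only if the current step was
    reinforced; never advance past the target behavior."""
    if reinforced and stage_index < len(STAGES) - 1:
        return stage_index + 1
    return stage_index

stage = 0
for _ in range(3):  # each reinforced step unlocks the next approximation
    stage = advance(stage, reinforced=True)
# The user has now reached the target behavior: buying a component.
```

The design point is that the app never asks for the purchase up front; it rewards each intermediate step, so the final ask feels like a small increment rather than a leap.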

Andrew Coyle
Andrew Coyle has worked as a designer for companies including Google, Intuit, and Flexport. He is currently the founder of NextUX, a visual editor and collaborative whiteboard.
