
Hick’s Law

by Edward Beebe-Tron
4 min read

Ever heard of the term FOMO? That is exactly what your brain experiences when you are handed a large selection of choices. Even after narrowing down a multi-page menu, that "fear of missing out" lingers: the nagging sense that you still made the wrong choice.

This is Hick's Law: the time it takes to make a decision increases with the number and complexity of the choices available.
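The formal version of this idea is the Hick–Hyman law, which models decision time as growing with the logarithm of the number of choices: T = a + b·log₂(n + 1). Here is a minimal sketch in Python; the coefficients `a` and `b` are illustrative placeholders, not measured values.

```python
import math

def decision_time(n_choices, a=0.2, b=0.15):
    """Hick-Hyman estimate of decision time in seconds:
    T = a + b * log2(n + 1)
    a: base reaction time, b: seconds per bit of choice.
    Both coefficients are illustrative, not empirical.
    """
    return a + b * math.log2(n_choices + 1)

for n in (2, 8, 32):
    print(f"{n:>2} choices -> ~{decision_time(n):.2f} s")
```

Notice that doubling the number of options adds a roughly constant increment to the estimate rather than doubling it, which is why a menu with 32 dishes doesn't feel 16 times harder than one with 2, only noticeably harder.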

Think about the times you visited a restaurant chain and were handed a menu with 3–5 pages of dishes. Even though everything was probably grouped by meal type, dietary restrictions, and so on, did you ever catch yourself thinking, "I can't decide"? You may even have run a process of elimination, picking out the dishes that sounded most appetizing at the time and narrowing the list down to just a few options. This is usually the point where the decision becomes too hard to make and you fall back on input from the server or others at the table.

Examples

Pretend you don't know what Amazon is or does. Could you figure it out just by looking at the homepage? It is packed with information: Whole Foods Market promotions, playlists, video recommendations, and a navigation system that spans the entire length of the page. A first-time visitor has a very hard time deciding where to go or what to do to learn what Amazon even is. Is it a search engine? Is it a media site? What products does it actually sell? How do I buy one?


Conclusion

Simplify, simplify, simplify! This is a phrase you will probably see me repeat over and over again.

  • Cut down on the amount of information you provide a user on each page.
  • Reduce the number of products you are showing at a given time.
  • Minimize the amount of text a user has to read. (Users don’t read, they scan.)
  • Try to only have 1 call-to-action per section.
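Hick's Law's usual log₂ model can put rough numbers on this advice. In the sketch below (the `hick_time` helper and its coefficients are illustrative assumptions, not measured values), a flat page of 40 products is compared with the same catalog split into 5 categories of 8 products each:

```python
import math

def hick_time(n, a=0.2, b=0.15):
    # Hick's Law estimate: T = a + b * log2(n + 1).
    # a and b are illustrative placeholders, not empirical values.
    return a + b * math.log2(n + 1)

# 40 products on one page vs. 5 categories of 8 products each.
flat = hick_time(40)
step1 = hick_time(5)   # first choose a category
step2 = hick_time(8)   # then choose a product within it
print(f"flat page:       ~{flat:.2f} s in one heavy decision")
print(f"grouped, step 1: ~{step1:.2f} s, step 2: ~{step2:.2f} s")
```

Note that the two grouped steps can add up to more total time than the single flat decision; the win is that each individual screen demands far less of the user at once, which is what "reduce the number of products you are showing at a given time" is really about.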

Edward Beebe-Tron

Edward Beebe-Tron is a researcher, designer, and writer. Edward's non-traditional background spans over 7 years of research experience for Fortune 500, nonprofit, healthcare, and SaaS companies. Edward also founded a nonprofit research organization studying the intersection of food waste and food insecurity across the city of Chicago. A self-described food nerd, Edward spends their free time researching food topics like the history of the hamburger or the scientific case for pineapple on pizza.
