
How filter bubbles confirm our biases and what we can do about it

by Kashish Masood
4 min read

Balancing personalized content with other sources of information

Last week my sister showed me a cute cat video on Instagram (because what else do you use social media for?). She then proceeded to scroll down her explore page and got lost in a sea of photos, videos and stories that caught her interest. Beyond initially squealing over the cute cat video (don’t judge!), this short interaction made me realize how different my own explore page looked.

Having written before on the topic of personalization, I'm not surprised by how our online activity is used to shape the user experience. Our interactions feed personalization algorithms with data, which they in turn use to serve us tailored content.

This experience made me wonder what type of information we are NOT presented with and how that influences the lens with which we view the world.

In this article, I’ll dive into how personalization can limit the type of information we’re exposed to. I’ll also explore ways in which we can achieve a balanced media diet while reaping the benefits of personalization.

Introducing filter bubbles

The concept of having a personalized space online is not new. It was first introduced as the ‘filter bubble’ by Eli Pariser in his famous TED talk back in 2011. Pariser describes the filter bubble as a private space consisting of things and ideas we like based on our interactions with the Web.

Filter bubbles exist for a practical reason. We are constantly bombarded with information from countless channels, and that volume makes it harder to act on the information and make a choice. For example, when planning a vacation to Spain, deals for other countries won't help you much. That's where filter bubbles come in: by presenting only information that is relevant to your situation, they create an environment that suits your needs.
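As a toy illustration (with made-up data, not any real travel platform's logic), relevance filtering can be as simple as dropping everything that doesn't match what the system believes you care about:

```python
# Hypothetical deals data: only items matching the inferred intent ("Spain")
# survive the filter, which is exactly what makes the result feel useful.
deals = [
    {"destination": "Spain", "price": 450},
    {"destination": "Japan", "price": 1200},
    {"destination": "Spain", "price": 390},
    {"destination": "Brazil", "price": 980},
]

spain_deals = [d for d in deals if d["destination"] == "Spain"]
print(spain_deals)  # everything else has quietly disappeared from view
```

The convenience is real; the cost is that you never see what was filtered out.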

So, what’s the problem here?

When you're only presented with facts that align with your existing views, it's easy to end up with a biased worldview, because alternative perspectives go unexplored. Online, this bias is more subtle: you don't actively choose what appears in your social media feeds and Google search results. Since personalization algorithms work 'behind the scenes,' you are less likely to realize you are inside a filter bubble at all.

For instance, when UK citizens cast their votes in the Brexit referendum, many older citizens voted to leave the European Union. This caught the younger generation by surprise: they were in an online bubble where the sentiment ran the other way, and they had been unable to consider the less visible views of older citizens.

To add to this, consuming information this way can lead to a snowballing confirmation bias over time. The digital content you interact with generates further similar content for you to interact with in the future, amplifying the effect of the bias.
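A minimal sketch of that feedback loop, using an invented user model and invented topics rather than any real platform's algorithm, shows how quickly exposure can narrow:

```python
# Sketch of engagement-driven recommendations snowballing: the more you click
# a topic, the more of it you are shown, so exposure narrows over time.
import random
from collections import Counter

TOPICS = ["cats", "politics", "travel", "tech", "sports"]

def recommend(click_history, n=10):
    """Weight each topic by past clicks (+1 smoothing) and sample a feed."""
    counts = Counter(click_history)
    weights = [counts[t] + 1 for t in TOPICS]
    return random.choices(TOPICS, weights=weights, k=n)

clicks = ["cats"]  # a single initial interaction
for day in range(30):
    feed = recommend(clicks)
    for item in feed:
        # Simulated user: slightly more likely to click topics they already follow.
        familiarity = clicks.count(item) / len(clicks)
        if random.random() < 0.3 + 0.1 * familiarity:
            clicks.append(item)

print(Counter(clicks))  # the starting topic typically ends up dominating
```

The point is not the exact numbers but the shape: a small initial preference, fed back through engagement-weighted recommendations, compounds into a feed dominated by one topic.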

Where does that leave us?

Since filter bubbles have a functional purpose, getting rid of them does not seem like an effective solution. At the same time, the example above demonstrates the need to expand our personal bubble with more diverse perspectives. So how do you balance personalized content with other sources of information?

1. Help users recognize when they are in a filter bubble

The first step to avoiding the downsides of filter bubbles is for users to recognize when they are inside one. This could be done, for instance, by indicating that the recommended content a user is viewing doesn't present a balanced view. Some applications already try to tackle this issue. The newsletter Knowhere uses AI to write unbiased news stories, although that approach removes filter bubbles entirely rather than making them visible. Gobo, an MIT Media Lab project, helps users see what gets hidden from their social networks, creating awareness of the personalization algorithms at work. This shows users to what extent their content consumption is shaped by their specific interests.

[Image: the Knowhere newsletter]
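As a rough sketch of what such an indicator could look like (my own illustration, not how Gobo or Knowhere actually work), a feed could measure how concentrated its recommendations are around a single viewpoint and warn the user past a threshold:

```python
# Hypothetical feed items, each tagged with a viewpoint; the indicator fires
# when one viewpoint makes up most of what the user is being shown.
from collections import Counter

def viewpoint_share(feed_items):
    """Return the share of the feed taken up by the most common viewpoint."""
    counts = Counter(item["viewpoint"] for item in feed_items)
    return max(counts.values()) / len(feed_items)

feed = [
    {"title": "Story 1", "viewpoint": "remain"},
    {"title": "Story 2", "viewpoint": "remain"},
    {"title": "Story 3", "viewpoint": "remain"},
    {"title": "Story 4", "viewpoint": "leave"},
]

if viewpoint_share(feed) > 0.7:
    print("Heads up: most of what you're seeing reflects a single viewpoint.")
```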
2. Create multiple ways of exploring content

Ideally, users should be able to explore content through routes other than recommendations alone. This can be achieved by creating multiple ways for users to explore content on platforms and apps. For example, even though Netflix is well known for its use of personalization algorithms, it still lets users search for new content by genre and by what's trending. Apps could even add a 'Surprise me' button for users who want to be exposed to content outside their filter bubble.

[Image: Netflix]
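A 'Surprise me' feature could be as simple as deliberately sampling from outside the user's known interests. The sketch below is hypothetical (an invented catalog, not Netflix's actual system):

```python
# Two complementary modes: "For you" draws from genres the user already watches,
# "Surprise me" draws only from genres they have never engaged with.
import random

CATALOG = {
    "thriller": ["Item A", "Item B"],
    "documentary": ["Item C", "Item D"],
    "comedy": ["Item E", "Item F"],
    "anime": ["Item G", "Item H"],
}

def personalized(user_genres, k=3):
    pool = [item for g in user_genres for item in CATALOG[g]]
    return random.sample(pool, min(k, len(pool)))

def surprise_me(user_genres, k=3):
    pool = [item for g in CATALOG if g not in user_genres for item in CATALOG[g]]
    return random.sample(pool, min(k, len(pool)))

user_genres = {"thriller", "comedy"}
print("For you:", personalized(user_genres))
print("Surprise me:", surprise_me(user_genres))
```

Keeping both modes side by side is the design point: personalization stays useful, but stepping outside the bubble is always one tap away.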

The main takeaway

Filter bubbles exist for a practical reason, and because of that practicality it would be illogical to remove them completely. But giving users a choice in how they view content matters. There are moments when users are perfectly happy listening to curated music playlists, and others when they want to actively explore new music without any recommendations.

Catering to both types of experience, by combining personalized content with other sources, provides a much richer and more valuable experience for people.
Kashish Masood

Kashish is a UX researcher and strategist, based in the Netherlands. She is passionate about creating unique experiences by exploring the intersection of future trends, tech and human behaviour. 
