Introduction
In today’s world, we spend an increasing amount of time on digital platforms. We learn much of what we know by reading, watching, and discovering content on platforms such as Instagram, TikTok, YouTube, or Google Search. Because the stream of content these platforms carry appears infinite, they have come to rely on filtering systems that manage the overwhelming amount of data. These systems learn from our behaviour and predict what we are most likely to enjoy.
In everyday life, this feels convenient. Instead of searching for everything manually, we are given content that seems relevant and familiar. However, because these systems determine what remains hidden from us, they inevitably shape how we interpret information and form opinions. This dynamic is often described as filter bubbles, a term introduced by Pariser (2011).
However, some studies suggest that filter bubbles are not universal and argue that user behaviour and multi-platform media habits can prevent bubbles from forming. Because these perspectives differ significantly, the essay will examine what personalization entails, where the risks are genuine, and where they may be overstated.
To study this phenomenon, this essay draws on a review of academic literature: empirical studies, theoretical papers, and reports examining algorithmic personalization and its effects. Sources were selected through keyword searches.
This essay approaches the topic by reviewing a focused selection of research papers on algorithmic personalization, filter bubbles, and their societal impact. It also focuses on three main research questions:
- How do personalization systems work, and how do they contribute to filter bubbles?
- What are the social and ethical consequences of filter bubbles?
- What could help reduce negative effects without losing the benefits of personalization?
To answer these questions, the essay first explains what is behind algorithmic personalization. Then, it examines how filter bubbles form and shows how these processes differ across major digital platforms. The third section explains the negative consequences of personalization.
Finally, the essay discusses possible solutions to these problems. In short, the goal of this essay is to present a balanced understanding of the filter bubble question, from both points of view. By examining the research from several angles, the essay tries to clarify a debate that is often oversimplified and to propose realistic ways to support a healthier and more diverse information environment.
How Algorithmic Personalization Works
First, to understand why filter bubbles even form, it is important to look at how personalization systems work. Although platforms differ in their technical design, they share a similar goal: predicting which content is most likely to keep each user engaged.
Algorithms learn from behaviour
Personalization begins with the data people generate while using a platform. Algorithms observe what users click on, how long they watch a video, which posts they like or ignore, which accounts they follow, and what they search for. Over time, these signals help the system infer patterns in individual preferences.
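To make the idea concrete, the following minimal sketch (with invented signal names and weights) shows how behavioural signals of this kind might be aggregated into per-topic affinity scores; real systems use far richer features and learned, not hand-picked, weights.

```python
from collections import defaultdict

# Hypothetical interaction log: each event carries a topic label and a signal type.
events = [
    {"topic": "fitness", "signal": "watch", "watch_ratio": 0.5},
    {"topic": "fitness", "signal": "like"},
    {"topic": "news", "signal": "skip"},
    {"topic": "beauty", "signal": "watch", "watch_ratio": 0.25},
]

# Illustrative weights: strong positive signals count more, skips count against.
SIGNAL_WEIGHTS = {"like": 2.0, "watch": 1.0, "skip": -0.5}

def infer_preferences(events):
    """Aggregate behavioural signals into a per-topic affinity score."""
    scores = defaultdict(float)
    for event in events:
        weight = SIGNAL_WEIGHTS.get(event["signal"], 0.0)
        if event["signal"] == "watch":
            # Scale watch events by how much of the video was actually seen.
            weight *= event.get("watch_ratio", 0.0)
        scores[event["topic"]] += weight
    return dict(scores)

print(infer_preferences(events))
# {'fitness': 2.5, 'news': -0.5, 'beauty': 0.25}
```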
The effects are apparent in daily use. Watching several fitness videos on YouTube quickly results in more sports-related recommendations. Interacting with beauty content on Instagram reshapes the Explore page into a largely cosmetic and fashion-focused feed. Even search engines behave similarly. Hannák et al. (2013) found that two people entering identical queries may receive different Google results because the system takes into account past searches or location.
Machine learning in the background
Most personalization systems rely on machine-learning models that process far more data than any human editor could manage. These models try to estimate what each user will find interesting or engaging.
YouTube relies on a two-stage neural-network architecture. In the first stage, the system gathers a broad set of potentially relevant videos. The second model then ranks them by predicted watch time. The result depends on both the user’s behaviour and large-scale patterns observed across millions of accounts (Covington et al., 2016).
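A simplified sketch of this two-stage pattern is shown below; the candidate filter and the watch-time scorer are toy stand-ins for the learned models Covington et al. (2016) describe, and the field names are invented.

```python
def generate_candidates(catalogue, user_topics, limit=200):
    """Stage 1: cheaply narrow a huge catalogue to a broad pool of candidates."""
    pool = [video for video in catalogue if video["topic"] in user_topics]
    return pool[:limit]

def predicted_watch_time(video, user_profile):
    """Stage 2 scorer: a toy proxy for a learned watch-time prediction model."""
    affinity = user_profile.get(video["topic"], 0.0)
    return affinity * video["avg_watch_minutes"]

def recommend(catalogue, user_profile, k=10):
    """Combine the two stages: broad retrieval, then fine-grained ranking."""
    candidates = generate_candidates(catalogue, set(user_profile))
    ranked = sorted(candidates,
                    key=lambda video: predicted_watch_time(video, user_profile),
                    reverse=True)
    return ranked[:k]
```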
Collaborative filtering identifies users with similar patterns and recommends what those users have previously enjoyed. Content-based analysis, on the other hand, examines the characteristics of the content itself—hashtags, text, sound, colours, or objects—and matches them with user preferences.
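As an illustration, a minimal user-based collaborative filter fits in a few lines; the interaction data below is entirely made up.

```python
import math

# Toy user-item interactions: 1 means the user engaged with the item.
ratings = {
    "alice": {"yoga": 1, "running": 1, "makeup": 0},
    "bob":   {"yoga": 1, "running": 1, "gaming": 1},
    "carol": {"makeup": 1, "fashion": 1},
}

def cosine(u, v):
    """Cosine similarity between two users' interaction vectors."""
    dot = sum(u.get(item, 0) * v.get(item, 0) for item in set(u) | set(v))
    norm_u = math.sqrt(sum(x * x for x in u.values()))
    norm_v = math.sqrt(sum(x * x for x in v.values()))
    return dot / (norm_u * norm_v) if norm_u and norm_v else 0.0

def recommend_for(user, ratings, k=3):
    """Recommend items liked by similar users that `user` has not seen yet."""
    me = ratings[user]
    scores = {}
    for other, theirs in ratings.items():
        if other == user:
            continue
        sim = cosine(me, theirs)
        if sim <= 0:
            continue
        for item, liked in theirs.items():
            if liked and item not in me:
                scores[item] = scores.get(item, 0.0) + sim
    return sorted(scores, key=scores.get, reverse=True)[:k]

print(recommend_for("alice", ratings))  # ['gaming'], via Alice's overlap with Bob
```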
Some platforms adjust even more rapidly. TikTok, for example, uses reinforcement learning to respond almost instantly to micro-behaviours such as rewatching a few seconds of a clip. Klug et al. (2021) show that the app can build a surprisingly accurate impression of a user after only a short period of scrolling.
Ranking algorithms and hidden bias
Once an algorithm predicts what we might enjoy, it still must decide what to show first. This is where ranking systems come in. Ranking systems typically place posts with the highest predicted engagement at the top. That often means emotionally intense, visually striking, or sensational material appears first.
Bozdag (2013) points out that this creates a structural bias. Informational or balanced content is frequently pushed downward simply because it attracts fewer reactions.
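A toy ranking example, with invented engagement estimates, makes this bias concrete: sorting purely by predicted engagement pushes the sensational item to the top regardless of its informational value.

```python
# All numbers are invented; real systems use learned engagement predictions.
posts = [
    {"title": "Balanced policy explainer", "predicted_engagement": 0.12},
    {"title": "Outrage-bait hot take",      "predicted_engagement": 0.47},
    {"title": "Friend's vacation photo",    "predicted_engagement": 0.31},
]

# Rank posts from highest to lowest predicted engagement.
feed = sorted(posts, key=lambda post: post["predicted_engagement"], reverse=True)
for rank, post in enumerate(feed, start=1):
    print(rank, post["title"])
# 1 Outrage-bait hot take
# 2 Friend's vacation photo
# 3 Balanced policy explainer
```

In a real system the scores come from learned models rather than fixed numbers, but the ordering principle is the same.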
Over time, this contributes to a highly curated version of reality. Users see only a small fraction of the information that is technically available to them, and that fraction is shaped by their past behaviour. This is what makes personalization feel so seamless and convenient, but it also means that people may unknowingly miss out on diverse viewpoints or alternative perspectives. When the same types of posts keep appearing repeatedly, it becomes easier for information environments to narrow and for filter bubbles to take shape.
This leads to a central question: under what conditions do these systems produce filter bubbles?
How Filter Bubbles Form
Personalization is designed to make platforms easier and more enjoyable to use, but it can also narrow the range of information a person sees. Pariser (2011) describes a filter bubble as an environment in which algorithms repeatedly present content aligned with past preferences, reducing exposure to diverse or conflicting viewpoints. This narrowing results from several interacting factors: technological design, user behaviour, and the structure of social networks.
Reinforcement loops: getting more of the same
One of the main mechanisms behind filter bubbles is the reinforcement loop. Personalization systems rely on the idea that the things users engaged with before are probably the things they will want to see again. As a result, actions such as a click, a pause, or a like send signals to the algorithm about people’s preferences, and the system adjusts recommendations in response.
Researchers have observed this reinforcement-loop pattern across many platforms. Google Search, for example, orders results based on the user’s history (Hannák et al., 2013). TikTok’s system is even more sensitive: something as small as a brief hesitation or watching a video twice can noticeably change the feed (Medina Serrano et al., 2020; Klug et al., 2021).
As these signals accumulate, recommendations can become increasingly repetitive. This process often begins long before users recognise that their feed has become homogeneous.
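The loop can be illustrated with a small simulation in which the feed samples topics in proportion to learned weights and every engagement increases the corresponding weight; all numbers here are illustrative.

```python
import random

# Toy reinforcement loop: engagement with a topic raises its sampling weight.
weights = {"sports": 1.0, "news": 1.0, "music": 1.0}

def pick_topic(weights):
    """Sample the next recommendation in proportion to the current weights."""
    topics, w = zip(*weights.items())
    return random.choices(topics, weights=w, k=1)[0]

random.seed(1)
for _ in range(50):
    topic = pick_topic(weights)
    engaged = topic == "sports"   # this user only ever engages with sports
    if engaged:
        weights[topic] += 0.5     # the system reads engagement as preference

print(weights)
# After a few dozen iterations the sports weight dominates, so sports content
# is recommended, engaged with, and reinforced ever more often.
```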
Our friends shape what we see
Technology is not the only factor. Human behaviour naturally contributes to informational narrowing. People tend to interact with others who share similar views, a pattern known as homophily. On social networks, this tendency forms clusters of like-minded users.
Del Vicario et al. (2016) found that information spreads more effectively within these ideologically similar groups, while opposing viewpoints rarely enter or gain traction. When personalization algorithms rely on the social graph, as Facebook does, homophily and algorithmic ranking reinforce each other. The result is an information environment that is already partially narrowed before algorithmic filtering even takes place.
We prefer information that confirms our beliefs
Selective exposure further strengthens this effect. People often prefer information that aligns with what they already believe. When users repeatedly choose agreeable content, algorithms treat these choices as strong evidence of preference and surface more of the same.
Nguyen et al. (2014) found that recommendation systems not only reflect these tendencies but also intensify them. Each interaction is interpreted as confirmation, which can gradually limit the variety of information that reaches a user.
Different Platforms, Different Bubble Dynamics
Although the basic logic of personalization is similar across platforms, the way filter bubbles develop can look quite different. Each service collects its own data, reacts to different signals, and optimises for specific goals, which means the narrowing effect varies from one platform to another.
YouTube: deep recommendation chains and topic drift
YouTube’s goal is to maximise watch time. Because recommendations depend heavily on behavioural similarity rather than on social connections, users can easily drift into narrow content categories.
Ribeiro et al. (2020) describe a pattern of topic drift, in which people start with neutral videos and gradually end up watching more niche or extreme material simply because the algorithm interprets repeated engagement as preference.
TikTok: extreme speed and identity clustering
TikTok’s algorithm responds almost instantly to tiny behavioural cues such as pauses, scrolls, and likes. Studies show that TikTok can build a surprisingly accurate profile of a user within just a few moments of interaction (Klug et al., 2021).
TikTok tends to create identity-based clusters. These often revolve around themes such as fitness, beauty, self-improvement, mental health, or gaming. For instance, if a person reads a certain fantasy book and starts liking or seeking out videos with a similar theme, their feed will quickly fill with dragons, book recommendations, and reviews. The same clustering happens on other platforms, but TikTok’s reaction time is the fastest.
Because the content is short, emotional, and endlessly personalised, the speed and intensity of TikTok’s system can be particularly influential, especially for adolescents who are still forming their sense of identity.
Instagram: all about aesthetics
Instagram’s system often creates bubbles that focus on aesthetics or lifestyle themes. Chua and Chang (2016) show that seeing idealised photos again and again can influence how people feel about their own appearance, especially when they compare themselves to influencers or friends online.
Because Instagram reacts to small cues, the Explore page can become narrow after only a few interactions. Instead of showing a mix of content, Instagram tends to repeat the same styles, beauty ideals, or lifestyle trends that the user has already engaged with. As a result, people often end up in an aesthetic bubble filled with similar looks, body types, or fashion ideas, which can subtly affect how they see themselves and what they think is normal.
Facebook: social homophily and ranking
Facebook combines the social circles that users choose themselves with the platform’s ranking system. Most people already tend to stay close to friends who think and behave similarly, and Facebook’s algorithm strengthens this tendency by highlighting posts that generate strong engagement. Bakshy et al. (2015) found that even when users follow people with a range of political views, the feed still leans toward material that matches what they already believe. Because emotional political content keeps users engaged, the algorithm tends to prioritize it, making already familiar perspectives appear even more often.
This combination of social homophily and engagement-based ranking explains why political discussions on Facebook can feel quite one-sided and why ideological bubbles appear more distinct there than on other platforms.
Google Search: subtle personalization
Google personalizes search results more subtly than social media platforms do, yet even these small adjustments can matter to some users.
The order of search results may shift depending on search history, clicks, or the user’s location (Hannák et al., 2013). These differences are usually not large, but they can become more noticeable when queries relate to politics, controversies, or commercial topics.
Filter bubbles are not guaranteed to occur
Despite these mechanisms, some scholars argue that filter bubbles do not affect all users equally. Haim, Graefe, and Brosius (2018) observed that personalised Google News feeds still showed substantial overlap. Personalization influenced ordering but did not create separate informational worlds.
Another key factor is that most people use multiple sources. Dubois and Blank (2018) found that individuals rarely depend on a single platform. They move between social media, messaging apps, search engines, and offline interactions, naturally broadening their information environment.
Users also make active choices. Zuiderveen Borgesius et al. (2016) show that people frequently seek content deliberately, follow diverse accounts, and search independently. Algorithms may amplify their habits, but they do not fully determine them.
Filter bubbles tend to emerge only when several conditions align, such as highly responsive algorithms, repetitive user behaviour, and socially homogeneous networks. For many users, these conditions do not consistently appear. As a result, filter bubbles are better understood as a contextual risk rather than a universal outcome of personalization.
Possible consequences
Filter bubbles can affect more than what appears on a user’s screen. Because personalization highlights familiar viewpoints while hiding others, it can influence how people interpret social issues, how their identities develop, and how they form their beliefs.
These effects vary, but research shows that algorithmic filtering can shape users’ perception of the world and even of their own bodies.
Political Polarization
Political polarization refers to the growing distance between ideological groups. Scholars distinguish ideological polarization (people adopting increasingly divergent policy preferences) from affective polarization, where individuals begin to view supporters of the opposite camp with suspicion or hostility. In the United States, both forms have intensified over the last two decades, with Democrats and Republicans perceiving each other as fundamentally different and less trustworthy (Pew Research Center, 2014).
Within this context, filter bubbles act as accelerators. They do not create political division, but they strengthen existing beliefs by narrowing exposure to differing viewpoints. Algorithms simply learn what users click on, and with enough time, they construct information environments that reinforce familiar narratives while filtering out competing ones (Pariser, 2011; Dubois & Blank, 2018).
In the United States, this dynamic is especially visible because of the clear two-party structure. The feed of a user who frequently interacts with liberal content (climate change, social justice, or reproductive rights) is shaped around those themes. Meanwhile, a conservative user engaging with content about border security, inflation, or gun rights has a very different feed. Even when both search for the same topic, such as immigration, the explanations, tone, and suggested solutions diverge. Over time, each group comes to view its own position as common sense and the opposing one as unreasonable or detached from reality.
A comparable, though more fragmented, version of this pattern also appears in Czechia. The political landscape includes many parties, which makes polarization less binary but still algorithmically reinforced. Someone who follows, for example, Piráti or environmental NGOs tends to encounter stories about transparency, climate responsibility, and social equality. Another user, who engages with SPD or ANO, is more likely to see posts about migration, inflation, geopolitical uncertainty, or distrust in EU institutions.
The problem is not that citizens disagree—disagreement is normal in any democracy—but that personalization gradually reduces the overlap in what people see, making mutual understanding feel more distant than it truly is.
Misinformation Spread
Filter bubbles can also support the spread of misinformation. If people repeatedly see the same type of content and do not come across alternative explanations, it becomes harder to judge what is true and what is not.
Research by Del Vicario et al. (2016) shows that misinformation spreads most effectively in connected, like-minded online communities. In these spaces, users interact with people who share similar beliefs, so false information can face very little disagreement. Once a misleading claim enters such a network, it is reposted and spread.
Personalization does not invent misinformation; it only creates a convenient environment for that information to spread. When feeds lack diverse viewpoints, even obviously questionable claims can appear trustworthy, and users may never encounter the evidence needed to reconsider them.
Reduced Critical Thinking and Passive Consumption
Another consequence of heavy personalization is how people approach information. Before the rise of personalization systems, people had to actively seek out information in order to learn, which naturally exercised critical thinking. Now that most content is pre-selected by algorithms, users feel less need to look for additional information; they passively consume whatever the algorithm chooses.
Evidence from Holone’s (2016) study of online health information shows that personalised search results can narrow the range of explanations users consider. When algorithms repeatedly provide similar answers, people feel confident that they understand a topic, even if they have not seen alternative interpretations. Over time, this reliance on algorithmic curation weakens critical thinking.
Youth and Identity Formation
Adolescents, among the most active users of social media, are especially sensitive to the effects of personalised feeds. Adolescence is a crucial stage in which identity and self-confidence are still developing, and platforms like TikTok, Instagram, and YouTube react to their behaviour with remarkable speed. Within a short time, the algorithm begins shaping what teenagers see, how they compare themselves to others, and which communities they feel they belong to.
Klug et al. (2021) show that TikTok can channel young users into narrow, identity-based content clusters such as beauty, dieting, or fitness after only a few minutes of interaction. The platform reinforces these clusters by repeating similar videos, sounds, and creators. For many teenagers, these loops feel natural rather than engineered, which makes the influence even harder to notice.
Young audiences typically have limited media-literacy skills and may not fully understand that algorithms filter and prioritise the information they see.
As a result, personalised feeds can have a stronger effect on their self-image, mood, and social expectations, making teenagers a particularly at-risk group.
Ethical Concerns
There are several important ethical issues concerning personalization systems. The first is transparency. Many users do not know why certain posts appear in their feed or what was filtered out. Helberger (2019) argues that when recommendation systems operate without explanations, users lose sight of the processes shaping their online experience.
There are also issues related to platform incentives. Because engagement generates advertising revenue, algorithms often prioritise emotionally charged, sensational, or provocative content (Bozdag, 2013). This does not necessarily reflect what is useful for users; it reflects what keeps them scrolling. As a result, people’s attention can be guided in subtle ways that benefit the platform more than the user.
A small number of technology companies now oversee much of the information flowing online. While users see only the curated feed presented to them, platforms control the algorithms and the decisions about what becomes visible or invisible. This imbalance undermines users’ individual autonomy.
Solutions and Interventions
Although filter bubbles can limit what people see online, they do not have to be a threat. Instead of removing personalization entirely, which would make digital platforms far less usable, the goal is to design and use these systems in ways that encourage diversity, transparency, and user control.
Diversity-Aware Algorithm Design
Several researchers suggest that the most effective changes must come from the platforms themselves. One direction involves designing recommender systems that include elements of novelty or “serendipity.” Instead of showing only the most predictable content, these systems intentionally weave in posts or videos that fall slightly outside the user’s usual preferences.
The idea is not new. Streaming services like Spotify have long used “discovery” playlists that mix familiar tracks with new genres. Studies such as Lv et al. (2024) show that people are generally open to these small surprises and that they can broaden the range of content they encounter without reducing satisfaction. YouTube’s Explore page follows a similar logic by offering categories unrelated to the user’s history, making it possible to step outside the usual feed with one click.
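A minimal sketch of such a diversity-aware slate, with hypothetical item lists and a fixed novelty share, might look like the following; production systems would instead tune this trade-off with learned diversity or serendipity objectives.

```python
import random

def diversified_feed(personalized, exploratory, novelty_share=0.2, size=10):
    """Blend a mostly personalized slate with a fixed share of novel items."""
    n_novel = max(1, int(size * novelty_share))
    slate = personalized[: size - n_novel]
    slate += random.sample(exploratory, k=min(n_novel, len(exploratory)))
    random.shuffle(slate)  # avoid always burying the novel items at the end
    return slate

# Usage sketch: roughly 80% familiar recommendations, 20% outside the profile.
feed = diversified_feed(
    personalized=["fitness clip", "protein recipe", "gym vlog",
                  "running shoes review", "stretching guide",
                  "home workout", "fitness podcast", "marathon tips"],
    exploratory=["local history doc", "jazz live session", "science explainer"],
)
print(feed)
```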
Increasing Transparency
One recurring problem is that people often have very little insight into how recommendation systems decide what appears on their screens. Helberger (2019) notes that this lack of transparency makes it difficult for users to recognise when their view of the world has narrowed. Some platforms have begun addressing this; for example, TikTok’s “Why this video?” option gives a basic explanation of why a clip was recommended.
A better approach would be to let users see which content categories the platform has assigned to them, which behaviours influenced their recommendations, and which topics are currently being deprioritised.
More Control for Users
Transparency becomes valuable only when users have options to act on it. Platforms should therefore offer simple tools that let users adjust and navigate their feeds. With such controls, people could deliberately seek out more diverse content instead of only scrolling through whatever is recommended, which would counteract consequences such as reduced critical thinking and the formation of filter bubbles.
Users also do not have to wait for platform-side fixes, which may take a long time to arrive; there are steps they can take themselves. Clearing browsing history, removing stored cookies, or logging out of accounts before searching can reduce personalization. Some people maintain two separate browser profiles, one logged in and one anonymous, to prevent algorithms from building behavioural profiles. Privacy-focused browsers and search engines such as Firefox’s private mode, DuckDuckGo, or Brave minimise data collection by not tracking searches, which weakens personalization signals.
External Oversight and Platform Accountability
Companies have full control over the algorithms that rank content, as well as the user data these systems depend on. Because of this, relying solely on voluntary transparency is unlikely to give users a complete picture of how these systems work.
In recent years, there have been some attempts to fill this gap. An example is the EU’s Digital Services Act (DSA), which now obliges major platforms to explain the basics of how their recommender systems function, give researchers access to key data, and offer users the option to switch to a non-personalised feed. These measures are intended to make platforms accountable for the influence their algorithms have on public information spaces.
Because algorithmically curated information can quickly become harmful, regular audits can help reveal whether an algorithm unintentionally boosts extreme content, sidelines certain groups, or encourages harmful reinforcement loops.
Regulation, then, works as a necessary counterpart to technical improvements. While engineers can design more balanced algorithms, external oversight ensures that platforms cannot quietly steer information flows without scrutiny.
Since filter bubbles emerge from a mix of algorithmic processes and user behaviour, no single solution can remove them completely. But combining several approaches, like clearer explanations of how feeds work, stronger user controls, more diversity-aware algorithm design, independent oversight, and better media literacy can reduce the risk of narrowing without entirely sacrificing personalized content.
Together, these measures point toward a healthier online environment: one where personalization still helps people find relevant information but no longer limits them to a narrow slice of what the internet has to offer.
Conclusion
Filter bubbles can influence the way people browse, think, and form opinions online, but the research shows that their effects are more nuanced than they are often portrayed. Behind the formation of filter bubbles stand personalization systems, which curate the data shown to each user and assemble their personalized feeds. These systems involve hidden mechanisms such as reinforcement loops, homophily, and selective exposure that can contribute to a narrowing information space. However, they do not operate uniformly across all platforms or users.
For many people, everyday habits such as consuming information from several sources, talking to others offline, or simply having varied interests, introduce enough diversity to counteract strong bubble effects. At the same time, the risks should not be dismissed. Political debates can become more polarised, misinformation spreads more easily in closed communities, and younger users’ mental health and development may be affected.
The solutions discussed in this essay suggest that it is possible to keep the advantages of personalization while reducing its downsides: greater transparency, more accessible user controls, thoughtful algorithm design, external oversight, and effort on the part of users. None of these steps alone can burst filter bubbles entirely, but together they can help create healthier digital environments where personalization improves our experience without quietly limiting the perspectives we encounter.
In the end, understanding how these systems shape our information environment is the first step toward using them more consciously. As long as individuals, designers, and policymakers recognise their shared responsibility, it is realistic to aim for online environments that support both relevance and diversity.
References
Bakshy, E., Messing, S., & Adamic, L. A. (2015). Exposure to ideologically diverse news and opinion on Facebook. Science, 348(6239), 1130–1132. https://doi.org/10.1126/science.aaa1160
Bozdag, E. (2013). Bias in algorithmic filtering and personalization. Ethics and Information Technology, 15(3), 209–227. https://doi.org/10.1007/s10676-013-9321-6
Chua, T. H. H., & Chang, L. (2016). Follow me and like my beautiful selfies: Singapore teenage girls’ engagement in self-presentation and peer comparison on social media. Computers in Human Behavior, 55(Part A), 190–197. https://doi.org/10.1016/j.chb.2015.09.011
Covington, P., Adams, J., & Sargin, E. (2016). Deep neural networks for YouTube recommendations. In Proceedings of the 10th ACM Conference on Recommender Systems (pp. 191–198). Association for Computing Machinery. https://doi.org/10.1145/2959100.2959190
Del Vicario, M., Bessi, A., Zollo, F., Petroni, F., Scala, A., Caldarelli, G., Stanley, H. E., & Quattrociocchi, W. (2016). The spreading of misinformation online. Proceedings of the National Academy of Sciences, 113(3), 554–559. https://doi.org/10.1073/pnas.1517441113
Dubois, E., & Blank, G. (2018). The echo chamber is overstated: The moderating effect of political interest and diverse media. Information, Communication & Society, 21(5), 729–745. https://doi.org/10.1080/1369118X.2018.1428656
Haim, M., Graefe, A., & Brosius, H.-B. (2018). Burst of the filter bubble? Effects of personalization on the diversity of Google News. Digital Journalism, 6(3), 330–343. https://doi.org/10.1080/21670811.2017.1338145
Hannák, A., Sapiezynski, P., Kakhki, A. M., Krishnamurthy, B., Lazer, D., Mislove, A., & Wilson, C. (2013). Measuring personalization of web search. In Proceedings of the 22nd International Conference on World Wide Web (pp. 527–538). Association for Computing Machinery. https://doi.org/10.1145/2488388.2488435
Helberger, N. (2019). On the democratic role of news recommenders. Digital Journalism, 7(8), 993–1012. https://doi.org/10.1080/21670811.2019.1623700
Holone, H. (2016). The filter bubble and its effect on online personal health information. Croatian Medical Journal, 57(3), 298–301. https://doi.org/10.3325/cmj.2016.57.298
Klug, D., Qin, Y., Evans, M., & Kaufman, G. (2021). Trick and please: A mixed-method study on user assumptions about the TikTok algorithm. In Proceedings of the 13th ACM Web Science Conference 2021 (pp. 84–92). Association for Computing Machinery. https://doi.org/10.1145/3447535.3462512
Pariser, E. (2011). The filter bubble: What the Internet is hiding from you. Penguin.
Pew Research Center. (2014). Political polarization in the American public. Pew Research Center. https://www.pewresearch.org/politics/2014/06/12/political-polarization-in-the-american-public/
Ribeiro, M. H., Ottoni, R., West, R., Almeida, V. A. F., & Meira, W., Jr. (2020). Auditing radicalization pathways on YouTube. In Proceedings of the 2020 Conference on Fairness, Accountability, and Transparency (FAT* ’20) (pp. 131–141). Association for Computing Machinery.
Regulation (EU) 2022/2065 of the European Parliament and of the Council of 19 October 2022 on a single market for digital services and amending Directive 2000/31/EC (Digital Services Act). (2022). Official Journal of the European Union, L 277, 1–102.
Zuiderveen Borgesius, F. J., Trilling, D., Möller, J., Bodó, B., de Vreese, C. H., & Helberger, N. (2016). Should we worry about filter bubbles? Internet Policy Review, 5(1). https://doi.org/10.14763/2016.1.401

