Faculty News & Research

New Study on Reddit Explores How Political Bias in Content Moderation Feeds Echo Chambers

Professor Justin Huang discusses new research on Reddit

Although public attention has led to corporate and public policy changes, algorithms and their creators might not be the only factors driving political polarization on social media. In a new study, Justin T. Huang, assistant professor of marketing, examines user-driven content moderation, a ubiquitous but overlooked contributor to the problem.

Huang and his collaborators, Ross School of Business PhD graduate Jangwon Choi and University of Michigan graduate Yuqin Wan, study the popular social media site Reddit, examining how subreddit moderators' biases in content-removal decisions across more than a hundred independent communities help create echo chambers.

The research team analyzes a massive dataset of more than 600 million comments from roughly 1.2 million Reddit users. Using a novel methodology that combines archival data with quirks of the Reddit application programming interface, they recover users' comments that were removed by subreddit moderators. Within this dataset, they identify the political leanings of both commenters and moderators and find that commenters whose political opinions differed from the moderators' were more likely to have their comments removed.
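
The paper's exact recovery pipeline is not spelled out here, but as a rough illustration of the general idea, the sketch below compares an archival snapshot of comments captured near posting time against a later pull of the same comments from the live API, treating a live body of "[removed]" as a moderator removal and recovering the original text from the archive. The file names, newline-delimited JSON layout, and matching logic are hypothetical, not the authors' code.

```python
# Hypothetical sketch: recover moderator-removed comments by comparing an
# archival snapshot (captured shortly after posting) with a later snapshot
# of the same comments. Field names follow Reddit's public comment schema;
# the data files and matching logic are illustrative assumptions.

import json
from typing import Dict, List


def load_comments(path: str) -> Dict[str, dict]:
    """Load newline-delimited JSON comments, keyed by comment ID."""
    comments: Dict[str, dict] = {}
    with open(path, encoding="utf-8") as fh:
        for line in fh:
            c = json.loads(line)
            comments[c["id"]] = c
    return comments


def find_removed_comments(archive_path: str, live_path: str) -> List[dict]:
    """Return comments whose archived text survives but whose live body
    reads "[removed]" (moderator removal) rather than "[deleted]" (user)."""
    archived = load_comments(archive_path)
    live = load_comments(live_path)

    removed = []
    for cid, arch in archived.items():
        current = live.get(cid)
        if current is None:
            continue  # comment missing entirely; removal cannot be attributed
        if current["body"] == "[removed]" and arch["body"] not in ("[removed]", "[deleted]"):
            removed.append({
                "id": cid,
                "subreddit": arch.get("subreddit"),
                "author": arch.get("author"),
                "original_body": arch["body"],  # text recovered from the archive
            })
    return removed


if __name__ == "__main__":
    for c in find_removed_comments("archive_comments.jsonl", "live_comments.jsonl"):
        print(c["subreddit"], c["author"], c["original_body"][:80])
```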

With an upcoming presidential election and ethical questions surrounding censorship on social media, this study raises important considerations for industry leaders and policymakers. Huang shared his insights in the following Q&A.

What are some of the negative implications of politically biased content removal?

Our research documents political bias in user-driven content moderation, namely that comments whose political orientation runs opposite to the moderators’ are more likely to be removed. This bias in content moderation creates echo chambers, which are online spaces characterized by homogeneity of opinion and insulation from opposing viewpoints. This can happen in a direct manner, as a tendency to censor one side of a political discussion will concentrate attention on the voices on the other side that get to be heard. It can also happen indirectly – users whose opinions are removed don’t receive engagement (upvotes, comments) on their content, or they might discover that their content has been taken down. Users don’t like that, so they disengage and stop commenting within the online space that is censoring them. We observe both of these patterns in this research.

A key negative implication of echo chambers is that they distort perceptions of political norms. We look to our peers to help form and shape our political beliefs, and being in an echo chamber can lead to a distorted view of what’s normal. In some cases, this can radicalize individuals and allow misinformation to go unchallenged. In other cases, it can lead to dismay at electoral outcomes and erode trust in them – how could Candidate A have won when everyone I spoke to supported Candidate B? Ultimately, this undermines the deliberative discourse and common understanding that are key to the proper functioning of our democracy.

Are these moderators intentionally looking to censor opposing viewpoints?

While the data can show us that a statistical bias against opposing political views exists, it cannot say anything directly about the intentions behind moderators’ actions. Research in other settings has shown that biases are often unconscious, and that could well be the case here. Subreddit moderation is a ripe environment for unconscious bias, as subreddit moderators face the Sisyphean task of enforcing the community’s often vague and ambiguous rules. In these cases, it’s very easy for biases around in-groups (my party) and out-groups (their party) to creep into and subtly influence human decision-making.

In identifying the political views of moderators and commenters, did you find that any particular political view was more likely to delete comments than others?

For regular users of Reddit, it should come as no surprise that the site is, on average, left-leaning. This is evidenced by the fact that the largest political subreddit on the website, /r/politics, is a bastion of Democratic support. It’s also borne out in our data and modeling of political opinion amongst users and moderators of the local subreddits we study. On a scale from 0-100, where 0 represents the staunchest Republican and 100 represents the staunchest Democrat, the average user in our data is a 58, and the average moderator is a 62. However, looking at these averages masks the fact that Reddit is incredibly diverse in its countless subreddit communities. Suffice it to say that biased content moderation is not limited to any one side.

Is this just an issue with Reddit, or could there be similar effects of user content moderation on other social media platforms?

While subreddit moderators are specific to Reddit, the type of user-driven content moderation we study in this research is present on all of the major social media platforms, including Facebook, TikTok, Instagram, YouTube, and X (formerly Twitter). These platforms give users ownership and moderation control over online spaces such as groups or the comment sections of content that they create, and there are practically no platform guidelines or oversight on how a user moderates. Drawing a parallel to the commercial setting of brand management, social media managers often recommend viewpoint-based censorship (removing comments from the “haters”) as a best practice for creating an echo chamber of positive brand opinion.

Based on your findings, what would you recommend social media companies do to foster more open discourse on their platforms?

User-driven content moderation plays a key role in combating toxicity and establishing community norms in online spaces, so the challenge for platform managers is to preserve its beneficial aspects while reducing the potential for abuse and echo chamber formation. Along these lines, here are a few things platforms could consider:

  1. Provide clear guidelines around what constitutes appropriate versus inappropriate reasons for content removal. While adherence to these guidelines may be imperfect, their presence would provide a firm starting point for content moderation and create a shared framework for both users and moderators. Further, educating moderators on the potential for and harms created by biased removals could lead moderators to be more judicious in their decisions.
  2. Increase the transparency of content removals by notifying users when their content is removed. Currently, most removals happen silently, without any notification to the user, which undermines trust and makes it difficult for users to challenge wrongful removals. Additionally, providing public-facing data on the volume of removals could help rein in abuses through public scrutiny and community pressure on moderators.
  3. Implement analytics and oversight to monitor the extent to which moderators exhibit political bias in their content moderation. Platforms can look to this research and take inspiration from our methodology for categorizing content and quantifying political leaning and bias. In combination with the guidelines above, analytics would allow platforms to automatically flag and follow up with moderators who may be abusing the system (a rough sketch of such an analysis appears below).
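
As a hypothetical illustration of the analytics described in the third point, the sketch below compares removal rates for comments on the same side of the 0-100 leaning scale as a community's moderators against comments on the opposite side, and flags communities where the gap is large. The data layout, the 50-point midpoint split, and the flagging threshold are assumptions made for illustration, not the study's methodology.

```python
# Illustrative bias analytics: compare removal rates for comments aligned
# with vs. opposed to a community's moderator leaning on the 0-100 scale
# described above. Data structures and the threshold are hypothetical.

from dataclasses import dataclass
from typing import Dict, List, Tuple


@dataclass
class Comment:
    leaning: float   # commenter's leaning: 0 (staunch Republican) to 100 (staunch Democrat)
    removed: bool    # True if moderators removed the comment


def _removal_rate(group: List[Comment]) -> float:
    """Share of comments in the group that were removed."""
    return sum(c.removed for c in group) / len(group)


def removal_gap(comments: List[Comment], mod_leaning: float) -> float:
    """Removal rate for comments on the opposite side of the 50-point
    midpoint from the moderators, minus the rate for same-side comments."""
    same = [c for c in comments if (c.leaning >= 50) == (mod_leaning >= 50)]
    opposed = [c for c in comments if (c.leaning >= 50) != (mod_leaning >= 50)]
    if not same or not opposed:
        return 0.0
    return _removal_rate(opposed) - _removal_rate(same)


def flag_communities(data: Dict[str, Tuple[float, List[Comment]]],
                     threshold: float = 0.05) -> List[str]:
    """Flag communities whose opposed-vs-aligned removal gap exceeds the
    threshold. `data` maps community name -> (moderator leaning, comments)."""
    return [name for name, (mod_leaning, comments) in data.items()
            if removal_gap(comments, mod_leaning) > threshold]
```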