Every day, millions of users scroll through content on Instagram, TikTok, and Facebook. Each of these platforms has its own methods for scanning and regulating posted content for misinformation, a task crucial to balancing user safety and freedom of speech. In the United States, however, platforms face no legal constraints on their content moderation methods, so each platform develops its own policies, with very little consistency across the board.

Research scientist Arjun Bhagoji and fifth-year Ph.D. student Brennan Schaffner, under the mentorship of Neubauer Professor Nick Feamster, Associate Professor Marshini Chetty, Assistant Professor Chenhao Tan, and Professor Genevieve Lakier from the UChicago Law School, set out to investigate the specific methods each platform uses to moderate its content and curb misinformation. In their recent paper, “Community Guidelines Make this the Best Party on the Internet”: An In-Depth Study of Online Platforms’ Content Moderation Policies, published at CHI ‘24, Bhagoji and Schaffner (along with several undergraduate co-authors) presented the first systematic study of content moderation policies across the 43 largest online platforms that host user-generated content, with a specific focus on copyright infringement, harmful speech, and misleading content. This work was funded by the Data & Democracy research initiative, a collaboration between the Data Science Institute and the Center for Effective Government at the University of Chicago.

The team focused on identifying the mechanisms by which companies claim they moderate content, and what happens when violating content is found. A question of particular interest was whether companies explicitly acknowledge using automated techniques, since these take some of the burden off human moderators.

Fifth-year Ph.D. student Brennan Schaffner

In order to isolate and identify the relevant portions of a given policy, they developed a policy annotation scheme. “One of the strong contributions of this paper is that we developed a principled annotation scheme, which outlines what we considered the policies’ critical components,” Schaffner comments. “We investigated each platform to find: How is problematic content found? What happens when it’s found? Do moderated users receive explanations, punishment, and/or means of appeal? By keeping a lens on the policies’ critical components, we were able to compare across popular platforms and uncover components that sites were frequently missing, such as explaining what rights were given to moderated users or what justifications platforms used to back up their chosen policies.”
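For illustration only, a single annotated policy record might be captured with fields like the following. This is a hypothetical sketch in Python, not the paper’s actual annotation scheme; all field names and values are assumptions.

```python
# Hypothetical sketch of what one annotated policy record might capture.
# Field names are illustrative, not the paper's actual annotation scheme.
from dataclasses import dataclass, field
from typing import List

@dataclass
class PolicyAnnotation:
    platform: str                                                  # e.g., "ExamplePlatform" (hypothetical)
    topic: str                                                     # e.g., "misinformation", "copyright"
    detection_methods: List[str] = field(default_factory=list)     # how violating content is found
    enforcement_actions: List[str] = field(default_factory=list)   # what happens when it is found
    user_explanation: bool = False                                 # are moderated users told why?
    appeal_available: bool = False                                 # can users appeal the decision?
    justification_given: bool = False                              # does the platform justify the policy?

example = PolicyAnnotation(
    platform="ExamplePlatform",
    topic="misinformation",
    detection_methods=["user reports", "automated tools"],
    enforcement_actions=["content removal", "account suspension"],
    user_explanation=True,
    appeal_available=True,
)
```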

Before they could annotate the text to produce the paper’s main contribution, a publicly available annotated dataset, Bhagoji and Schaffner had to build a custom web scraper to collect the policy text. One of their greatest challenges was building a crawler that could identify every content moderation-related passage on a page, since modern pages are difficult to scrape, employing anti-scraping measures such as dynamically loading elements, rate limiting, and IP blocking.

Research Scientist Arjun Bhagoji

“When pages are dynamically loaded, the page only loads when, for example, you scroll down on the page, or if it knows that the page is being accessed by a human,” Bhagoji explained. “So if you try to access the page in an automated manner, which is what we did, the page doesn’t load and we just get some sort of error. We spent almost six months building out our scraper to make sure that we hit every single page on these sites that mention relevant content moderation policy. We had to manually verify a fair amount of information.”
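To give a flavor of the challenge, here is a minimal sketch of how one might fetch a dynamically loaded policy page with a headless browser. It assumes the Playwright library and a hypothetical URL; it is not the team’s actual crawler.

```python
# Minimal sketch: rendering a dynamically loaded page before extracting its text.
# Illustrative only; the URL and scrolling heuristics are assumptions.
from playwright.sync_api import sync_playwright

POLICY_URL = "https://example.com/community-guidelines"  # hypothetical policy page

def fetch_policy_text(url: str) -> str:
    with sync_playwright() as p:
        browser = p.chromium.launch(headless=True)
        page = browser.new_page()
        page.goto(url, wait_until="networkidle")  # wait for dynamic content to load
        page.mouse.wheel(0, 5000)                 # scroll to trigger lazy-loaded sections
        page.wait_for_timeout(2000)               # give late-loading elements time to render
        text = page.inner_text("body")
        browser.close()
    return text

if __name__ == "__main__":
    print(fetch_policy_text(POLICY_URL)[:500])
```

A real crawler would also need retries, polite rate limiting, and manual verification of the extracted passages, as the team describes above.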

Through months of scraping and manual verification, the team found significant structural and compositional variation in policies across all topics and platforms. Even beyond well-known platforms like Facebook and X, many of the other platforms in the top 43 are used by hundreds of millions of users. Section 230, a law that shields social media companies from liability for content posted on their platforms, has left platforms essentially regulating themselves, making them what Schaffner calls “the de facto regulators of the Internet”.

Bhagoji and Schaffner worked closely with Professor Genevieve Lakier from the UChicago Law School throughout the project.

“Another reason why this work is important is because a lot of these policies have terms of service statements that are very legal-coded and not user friendly,” Schaffner added. “Part of our contribution is the ability to analyze these data all together at once, which makes some of the legal pieces more accessible for researchers and users. Professor Lakier helped us with the annotation scheme, to cut through some of the legal jargon and understand what was structurally important for our study. Since she’s a First Amendment scholar, we were also able to lay out the history of how content moderation has evolved in the United States.”

With Lakier’s help, Bhagoji and Schaffner realized that few, if any, companies made actionable claims in their content moderation policies, particularly regarding hate speech and misinformation. The latter was rarely even defined, making it difficult for platform users to determine how and why certain content was moderated or not. They found that these platforms rarely explicitly state how their content moderation policies are enforced, except in the case of copyright, where the Digital Millennium Copyright Act (DMCA) provides an explicit legal grounding.

For example, Bhagoji states, “A lot of companies claim that they use AI and other automated tools to do the moderation. However, other investigative reports have shown that these companies actually hire underpaid labor in the Global South to do this work. One of the key findings of our paper is that it is challenging to figure out who actually uses machine learning systems, which makes it hard to externally audit whether those systems are any good.”

Bhagoji has already moved on to the next phase of this project: an audit study that dives deep into one platform’s content moderation tools. He is cross-referencing the tools’ behavior against the platform’s stated policies, as documented in this paper, to study their effectiveness. He hopes other researchers will join the effort, extending it to other forms of moderation and to platforms the team has not yet examined.

Once he finishes the last piece of his dissertation, which targets user agency and manipulative platform design, Schaffner also plans to build on this work by investigating user perceptions of and experiences with content moderation, leveraging the breadth of this new dataset.

To learn more about their work, please visit the AIR lab and the NOISE lab pages.
