AI wedding photos
An Instagram post by BBCnewsindia showing the fake, AI-generated photos of Zendaya’s wedding

Zendaya and Tom Holland got married recently, and a beautiful photo from their wedding got over ten million likes on Instagram. But here’s the thing: the photo is fake. It was AI-generated, and ten million people believed it was real.

Over the past several years, one thing about social media has become abundantly clear. AI-generated content is increasingly common, and it is here to stay. The question then becomes, as AI-generated content becomes ubiquitous, how should online platforms govern it so that people can still trust what they see and share?

Lan Gao, a third-year PhD student in the AIR lab at the University of Chicago’s Department of Computer Science, is trying to answer this exact question. Mentored by Associate Professor Marshini Chetty, Gao focuses broadly on human-computer interaction (HCI) and, more specifically, is interested in AI governance and online trust and safety.

The idea for Gao’s latest paper, Governance of AI-Generated Content: A Case Study on Social Media Platforms, grew out of the rapid rise of generative AI tools since 2022. These systems are no longer niche: people now use them for everyday communication, writing, and content creation across the web. Even before this wave, platforms were already worried about deepfakes, but as generative tools spread to the mainstream, AI-generated content is no longer just the domain of “bad actors”, but is increasingly produced by everyone.

Commentators have started using terms like “AI slop” to describe the flood of synthetic text, images, and video reshaping online communities. That shift raised urgent questions for Gao: How are the platforms that host user-generated content, where anyone can post almost anything, responding to AI-generated material? Are they labeling it, restricting it, or treating it just like everything else?

“I wanted to conduct research on how stakeholders who host those online platforms manage this kind of issue,” Gao stated. “What’s unique about social media is that any user can post anything they want, and management is critical for these kinds of platforms. From the research perspective, there are limited studies that systematically analyze how different social media platforms really do those approaches.”

To understand how major platforms approach AI-generated content, Gao began by examining the platforms’ own words. The study collected and analyzed public-facing documents such as terms of service, legal and policy pages, help center articles, and support documentation that describe how AI-generated content is governed.

Gao and her team built an automated web-scraping pipeline to assemble this dataset at scale. Starting from seed pages that mentioned generative AI or related terms, the scraper followed links and extracted additional pages that contained relevant policies and guidance. They were able to capture how dozens of social platforms talk about AI-generated content in their official materials.
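The paper does not publish the pipeline’s code, but the process described above, starting from seed pages and following links to pages that mention generative-AI terms, can be sketched as a simple breadth-first crawl. The `SITE` dictionary below is a hypothetical in-memory stand-in for real HTTP fetching, and the page names and keywords are illustrative assumptions, not the study’s actual data:

```python
from collections import deque

# Hypothetical in-memory "site": page name -> (page text, outgoing links).
# A real pipeline would fetch and parse live pages instead.
SITE = {
    "help/ai-policy": ("Rules for generative AI content and labeling.",
                       ["help/labels", "help/careers"]),
    "help/labels": ("How we label AI-generated media.", ["help/ai-policy"]),
    "help/careers": ("Join our team!", []),
}

KEYWORDS = ("generative ai", "ai-generated")

def crawl(seeds):
    """Breadth-first crawl from seed pages, keeping only pages
    whose text mentions a generative-AI term."""
    seen, relevant = set(), []
    queue = deque(seeds)
    while queue:
        page = queue.popleft()
        if page in seen or page not in SITE:
            continue
        seen.add(page)
        text, links = SITE[page]
        if any(k in text.lower() for k in KEYWORDS):
            relevant.append(page)
        queue.extend(links)
    return relevant

print(crawl(["help/ai-policy"]))  # -> ['help/ai-policy', 'help/labels']
```

The `seen` set prevents the crawler from revisiting pages in link cycles, which official help centers commonly contain.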

From there, Gao conducted a qualitative analysis of the collected passages, identifying how platforms describe, categorize, and manage AI-generated content. The analysis revealed that 27 of the 40 platforms studied, roughly two-thirds, explicitly address AI-generated content governance in their documents. The remaining 13 either do not mention it or only address it indirectly.

Across those 27 platforms, Gao identified six main approaches to AI-generated content governance:

1) Moderating AI-Generated Content That Violates Existing Policies
2) Disclosing and Labeling AI-Generated Content
3) Restricting Posting and Sharing AI-Generated Content
4) Constraining Monetization of AI-Generated Content
5) Controlling Output Generation and Distribution from Integrated AI Tools
6) Educating and Empowering Users When Interacting with AI-Generated Content

The most common was the first, moderating AI-generated content that violates existing policies: 25 of the platforms explicitly emphasize that they apply their existing community guidelines and terms of service to AI-generated content.

Findings from the study
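Once each platform’s documents are coded against a taxonomy like the six approaches above, summary statistics such as “25 of 27 platforms use approach 1” fall out of a simple tally. A minimal sketch, using made-up platform names and coding results rather than the study’s data:

```python
from collections import Counter

# Hypothetical coding results: platform -> set of governance
# approaches it mentions (1-6, per the taxonomy). Illustrative only.
coded = {
    "PlatformA": {1, 2},
    "PlatformB": {1},
    "PlatformC": {1, 6},
    "PlatformD": {2, 5},
}

# Count how many platforms mention each approach.
counts = Counter(a for approaches in coded.values() for a in approaches)

most_common_approach, n = counts.most_common(1)[0]
print(f"Approach {most_common_approach} appears on {n} of {len(coded)} platforms")
```

Here approach 1 (moderation under existing policies) dominates the toy data, mirroring the paper’s finding that falling back on existing community guidelines is the most common strategy.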

Early debates largely focused on deepfakes produced by a small set of bad actors online. Those problems remain important, but generative AI has moved far beyond that niche. Today, AI is woven into everyday content production: a comment drafted with a chatbot, a video script co-written with an AI assistant, a music track modified by generative tools. Gao argues that stakeholders need to tailor governance strategies to the role AI plays on each platform, rather than relying on one-size-fits-all policies.

“This kind of policy oversight is just catching up right now, and social media platforms are just applying existing content moderation policies, or labeling AI content to ensure transparency,” Gao said. “By doing this study, we are creating this baseline in 2025 of how the most popular social media platforms are approaching this, so that we have the evidence to recommend stakeholders, regulators, and policymakers to do something.”

Currently, Gao plans to dig deeper into disclosure and labeling practices. She is studying how AI-generated content is actually labeled in practice on short-form video platforms like TikTok and YouTube Shorts, and how creators think about disclosure. By combining policy analysis, empirical measurement, and creator perspectives, Gao hopes that her work enables concrete recommendations for future governance. The goal is not only to understand current practices, but to help platforms and regulators design solutions that reflect the realities of AI-assisted creativity and the diversity of online spaces.

To learn more about Gao’s work, visit the AIR lab website.
