Analyzing an Archive of 20 Years of Trust and Safety Research


The Trust and Safety Archive

As part of their “History of Trust & Safety” project, the Trust & Safety Foundation (TSF) constructed an archive of peer-reviewed articles, books, and book chapters written about various online T&S topics. The aim of this archive was to begin organizing the academic literature on T&S. The archive, constructed during 2023, contains articles published between 1974 and 2023 (the record for 2023 is only partial, since the archive was created while articles were still being published that year). The creators describe their process of finding the work and building the archive as follows:

We prioritized our search in the publications and conferences for the following disciplines: Communications, Human-Computer Interaction (HCI), Law, Media Studies, and Science & Technology Studies (STS). We began our search with the following keywords/phrases: content moderation, online moderation, online harassment, online safety, online governance, cybersafety, and platform governance. We used these search terms to find relevant research across several research databases including Google Scholar, the Association for Computing Machinery Digital Library, LexisNexis, Scopus, and Web of Science. We read the resulting research and added relevant items to the Zotero archive. Additionally, leveraging the sources we read, we identified other relevant scholarship based on citations and author keywords.

This archive of 1,288 articles made up the corpus that we analyzed for this project.

A Focus on Empirical Research

As the T&S field matures, policymakers, regulators, and industry practitioners are increasingly hungry for empirical evidence to guide their work. In analyzing the T&S archive, we focused on better understanding the breadth of empirical work that has been conducted in the T&S literature.

As part of the Social Media Governance Initiative, we downloaded the TSF Zotero archive as a spreadsheet including the title, abstract, a link to the article, and article metadata (authors, publisher, article type, publishing date, etc.). Our postbaccalaureate fellow, Michael Bochkur Dratver, then began a systematic coding of all 1,288 articles in the archive using a codebook developed collaboratively within our lab, with feedback from others interested in the project.

First, each article was assessed as to whether it was empirical. Articles determined not to be empirical received no additional labels. Those that were empirical were then labeled according to the following categories:

  • Which platform(s), if any, were being studied?

  • Was the study design observational, quasi-experimental, an experimental lab study, or an experimental field study?

  • What methodologies were used to collect the data for the study?

  • What types of data were used in the study?

  • What were the different topics of the study?

  • What level of collaboration existed between the researchers and the platform or groups of platform users (like community moderators)?

Articles were read in as much detail as was necessary to assign the appropriate labels. In many cases, the abstract alone was enough to determine whether an article was empirical. For articles that were empirical, it was usually necessary to read significant portions of the article, with a specific focus on the methods section. Each article was labeled by a single annotator; a future iteration of this project may employ multiple annotators to check agreement on the applied labels.
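To make the coding scheme concrete, here is a minimal sketch (in Python) of how the codebook might be represented. The label names come from the notes further down this page; the data structure itself is our own illustration, not the lab's actual tooling:

```python
from dataclasses import dataclass, field

# Fixed-vocabulary categories, defined at the start of the project.
# (Label names are taken from the notes below; this structure is illustrative.)
EXPERIMENTAL_DESIGNS = {
    "Observational",
    "Quasi-Experimental",
    "Experimental (Lab)",
    "Experimental (Field)",
}
MEASURE_METHODOLOGIES = {
    "Interviews",
    "Survey Measurement",
    "Behaviors - Digital",
    "Behaviors - Non-digital",
    "Other",
}
COLLABORATION_TYPES = {
    "Independent",
    "Collaboration with Platform",
    "Collaboration with Mods or Users",
}

@dataclass
class ArticleLabels:
    """Labels assigned to one article in the archive."""
    title: str
    empirical: bool  # exactly one value per article; False means no further labels
    platforms: list[str] = field(default_factory=list)            # open vocabulary
    experimental: list[str] = field(default_factory=list)         # from EXPERIMENTAL_DESIGNS
    measure_methodology: list[str] = field(default_factory=list)  # from MEASURE_METHODOLOGIES
    measure_types: list[str] = field(default_factory=list)        # open vocabulary
    measure_topics: list[str] = field(default_factory=list)       # open vocabulary
    collaboration: list[str] = field(default_factory=list)        # from COLLABORATION_TYPES

# A non-empirical article receives no labels beyond the empirical flag:
example = ArticleLabels(title="A hypothetical law review essay", empirical=False)
```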


This poster was created for and presented at the 2024 Trust and Safety Research Conference at Stanford University.

Team

Michael Bochkur Dratver

Postbaccalaureate Fellow

Matt Katsaros

Director of the Social Media Governance Initiative

Funding

This project was funded by a grant from the Stavros S. Niarchos Foundation.


View the Full Dataset

We downloaded a CSV of the TSF Zotero archive containing the article title, abstract, URL, and other metadata, and imported it into Airtable, where we conducted our labeling. Below you’ll find the full labeled dataset, which you are welcome to browse, filter, or download. The labeling was conducted by a single labeler. Some categories used a small, fixed set of labels applied consistently throughout the project, while other categories had labels created and applied more liberally over the course of the labeling. Below the table, you’ll find some notes that may help you interpret the label categories, followed by a short example of filtering on them.
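As a rough illustration of the first step, here is a minimal sketch of loading such an export with pandas. The filename is hypothetical, and the column names (“Item Type”, “Title”, “Abstract Note”, “Publication Year”) follow Zotero’s default CSV export; an Airtable export of the labeled table would differ:

```python
import pandas as pd

# Hypothetical filename; column names follow Zotero's default CSV export.
df = pd.read_csv("tsf_zotero_archive.csv")

# How many items of each type (journal article, book, book chapter, ...)?
print(df["Item Type"].value_counts())

# Publication years in the archive span 1974-2023:
print(df["Publication Year"].min(), df["Publication Year"].max())

# A quick keyword check against the abstracts:
mask = df["Abstract Note"].str.contains("content moderation", case=False, na=False)
print(df.loc[mask, ["Title", "Publication Year"]].head())
```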

Notes on Labeling Categories:

  • Empirical: Every article in the archive has a label for this category. If the article involved collecting and analyzing empirical data, it was labeled “Yes” and the other categories were also labeled. If the article did not involve collecting or analyzing any empirical data, it was labeled “No” and no other labels were applied. Meta-studies were also identified using this category and, like non-empirical articles, received no additional labels. Each article receives exactly one label for this category.

  • Platform: Here we label the platform(s) being studied. Labels for this category were created and applied as we conducted the labeling and came across various platforms in the articles. Where the researchers studied social media generally or some fictional platform, the label “Non-Platform or Platform Agnostic” was applied. Multiple labels were applied when multiple platforms were studied.

  • Experimental: This category indicates the study’s experimental design using a small set of labels developed at the start of the project and used consistently throughout the labeling. The labels are “Observational”, “Quasi-Experimental”, “Experimental (Lab)”, and “Experimental (Field)”. While less common, multiple labels are applied when a study combines different designs.

  • Measure - Methodology: This label is used to indicate the method for collecting the empirical data. The labels were created at the start of the project and used consistently throughout the labeling process. The labels are “Interviews”, “Survey Measurement”, “Behaviors - Digital”, “Behaviors - Non-digital”, and “Other”. Each article can have more than one label for this category.

  • Measure - Type: This label is used to indicate the types of measures used in the empirical analysis. These labels were created throughout the labeling process, making this category and its labels slightly less reliable and consistent.

  • Measure Topic: This label is used to indicate the various topics studied in the article. These labels were also created throughout the labeling process, and many articles span multiple topics. For these reasons, the labels in this category are also less reliable and consistent.

  • Collaboration: This label is used to indicate the extent to which the researchers collaborated with either the platform or its users/mods. These labels were created at the start of the project and used consistently throughout. The labels in this category are “Independent”, “Collaboration with Platform”, and “Collaboration with Mods or Users”.

  • Source: The rightmost column indicates where we sourced the article. The analysis in the poster above uses only articles sourced from the TSF’s archive of trust and safety research. However, we also labeled the ~40 articles included in the Prosocial Design Library (all of which are empirical research papers).
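As an example of putting these categories together, here is a minimal sketch of pulling out empirical field experiments run in collaboration with a platform, assuming you download the table above as a CSV. The filename and column names are assumptions based on the category names above, and multi-label cells are assumed to be comma-separated strings, as Airtable exports multi-select fields:

```python
import pandas as pd

# Hypothetical export of the labeled table above.
df = pd.read_csv("labeled_dataset.csv")

def has_label(cell: object, label: str) -> bool:
    """True if a (possibly multi-label, comma-separated) cell contains the label."""
    return label in [part.strip() for part in str(cell).split(",")]

# Only empirical articles carry the remaining categories.
empirical = df[df["Empirical"] == "Yes"]

# Field experiments conducted in collaboration with the platform:
subset = empirical[
    empirical["Experimental"].map(lambda c: has_label(c, "Experimental (Field)"))
    & empirical["Collaboration"].map(lambda c: has_label(c, "Collaboration with Platform"))
]
print(subset[["Title", "Platform"]])
```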
