Statistical Methods for Auditing the Quality of Manual Content Reviews

06/12/2023
by Xuan Yang, et al.

Large technology firms face the problem of moderating content on their online platforms for compliance with laws and policies. To accomplish this at the scale of billions of pieces of content per day, a combination of human and machine review is necessary to label content. Subjective judgement and human bias are of concern both for human-annotated content and for the auditors who may be employed to evaluate the quality of such annotations for conformance with law and/or policy. To address this concern, this paper presents a novel application of statistical analysis methods to identify human error and these sources of audit risk.
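The abstract does not name the specific statistical methods applied, so the sketch below is only a minimal illustration of the kind of analysis such an audit might involve: Cohen's kappa, a standard chance-corrected measure of agreement between a reviewer's labels and an auditor's labels. The label set and sample data are hypothetical.

```python
from collections import Counter

def cohens_kappa(labels_a, labels_b):
    """Chance-corrected agreement between two label sequences.

    Returns a value in [-1, 1]: 1 is perfect agreement,
    0 is agreement no better than chance.
    """
    assert len(labels_a) == len(labels_b) and labels_a
    n = len(labels_a)

    # Observed agreement: fraction of items labeled identically.
    p_observed = sum(a == b for a, b in zip(labels_a, labels_b)) / n

    # Expected agreement if the two raters labeled independently,
    # each according to their own marginal label frequencies.
    counts_a = Counter(labels_a)
    counts_b = Counter(labels_b)
    p_expected = sum(
        (counts_a[c] / n) * (counts_b[c] / n)
        for c in set(counts_a) | set(counts_b)
    )

    if p_expected == 1.0:  # degenerate case: both raters are constant
        return 1.0
    return (p_observed - p_expected) / (1.0 - p_expected)

# Hypothetical audit sample: a reviewer's labels vs. an auditor's
# labels for the same ten pieces of content.
reviewer = ["violating", "ok", "ok", "violating", "ok",
            "ok", "violating", "ok", "ok", "ok"]
auditor  = ["violating", "ok", "violating", "violating", "ok",
            "ok", "ok", "ok", "ok", "ok"]

print(f"Cohen's kappa: {cohens_kappa(reviewer, auditor):.3f}")
```

In an audit setting, a kappa well below 1 on a sampled batch would flag disagreement beyond what raw percent agreement suggests, since it discounts matches expected by chance alone.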

