Toward Safer Crowdsourced Content Moderation

04/29/2018
by Brandon Dang, et al.

Though detection systems have been developed to identify obscene content such as pornography and violence, artificial intelligence is not yet good enough to fully automate this task. Because manual verification is still required, social media companies may hire internal reviewers, contract specialized workers from third parties, or outsource to online labor markets for commercial content moderation. These content moderators are often fully exposed to extreme content and may suffer lasting psychological and emotional damage. In this work, we aim to alleviate this problem by investigating the following question: How can we reveal the minimum amount of information to a human reviewer such that an objectionable image can still be correctly identified? We design and conduct experiments in which blurred graphic and non-graphic images are filtered by human moderators on Amazon Mechanical Turk (AMT), and we observe how obfuscation affects the moderation experience with respect to image classification accuracy, interface usability, and worker emotional well-being.
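The obfuscation described in the abstract is image blurring. The sketch below illustrates one way such blurred stimuli could be produced; it is an assumption for illustration only, using the Pillow library's GaussianBlur filter with illustrative radii, since the paper's actual blurring method and parameters are not given here.

    # Sketch only: generate blurred variants of an image, as might be shown
    # to moderators in an obfuscation experiment. Assumes Pillow is installed;
    # the blur radii are illustrative and not taken from the paper.
    from PIL import Image, ImageFilter

    def make_blurred_variants(path, radii=(2, 8, 16)):
        """Return a dict mapping blur radius to a Gaussian-blurred copy of the image."""
        original = Image.open(path).convert("RGB")
        return {r: original.filter(ImageFilter.GaussianBlur(radius=r)) for r in radii}

    if __name__ == "__main__":
        variants = make_blurred_variants("example.jpg")  # hypothetical input file
        for radius, img in variants.items():
            img.save(f"example_blur_{radius}.jpg")

Larger radii hide more detail, so a setup like this would let an experimenter trade off how much of an objectionable image is exposed against how reliably reviewers can still classify it.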
