Can Machines Help Us Answering Question 16 in Datasheets, and In Turn Reflecting on Inappropriate Content?

02/14/2022
by   Patrick Schramowski, et al.

Large datasets underlying much of current machine learning raise serious issues concerning inappropriate content, such as content that is offensive, insulting, or threatening, or that might otherwise cause anxiety. This calls for increased dataset documentation, e.g., using datasheets. Datasheets, among other topics, encourage dataset creators to reflect on the composition of their datasets. So far, however, this documentation is done manually and can therefore be tedious and error-prone, especially for large image datasets. Here we ask the arguably "circular" question of whether a machine can help us reflect on inappropriate content, answering Question 16 in Datasheets. To this end, we propose to use the information stored in pre-trained transformer models to assist in the documentation process. Specifically, prompt-tuning based on a dataset of socio-moral values steers CLIP to identify potentially inappropriate content, thereby reducing human labor. We then document the inappropriate images found using word clouds, based on captions generated with a vision-language model. The documentation of two popular, large-scale computer vision datasets – ImageNet and OpenImages – produced this way suggests that machines can indeed help dataset creators answer Question 16 on inappropriate image content.
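To make the described pipeline concrete, below is a minimal sketch of the two stages the abstract outlines: flagging potentially inappropriate images via CLIP's image-text similarity, and documenting the flagged images as a word cloud built from their captions. It is not the authors' code: it substitutes hand-written prompts and an off-the-shelf Hugging Face CLIP checkpoint for the paper's prompt-tuning on socio-moral values, and the prompt strings, the 0.5 threshold, and the helper names are illustrative assumptions. The captions are assumed to come from a separate vision-language captioning model, which is not shown here.

# Sketch of the two-stage idea from the abstract (assumptions noted above):
# (1) zero-shot CLIP flagging with hand-written prompts, standing in for
#     the paper's soft prompt-tuning on socio-moral values;
# (2) a word cloud over captions of flagged images for documentation.

import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor
from wordcloud import WordCloud

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

# Hypothetical concept prompts; the paper instead learns soft prompts
# from a dataset of socio-moral values.
prompts = ["an image showing inappropriate content",
           "an image showing harmless everyday content"]

def flag_image(path: str, threshold: float = 0.5) -> bool:
    """Return True if CLIP puts more than `threshold` probability on the
    'inappropriate' prompt relative to the neutral one."""
    image = Image.open(path).convert("RGB")
    inputs = processor(text=prompts, images=image,
                       return_tensors="pt", padding=True)
    with torch.no_grad():
        logits = model(**inputs).logits_per_image  # shape (1, num_prompts)
    probs = logits.softmax(dim=-1)[0]
    return probs[0].item() > threshold

def document_with_wordcloud(captions: list[str], out_path: str) -> None:
    """Aggregate captions of flagged images into a word cloud image,
    mirroring the documentation step described in the abstract."""
    text = " ".join(captions)
    WordCloud(width=800, height=400).generate(text).to_file(out_path)

The key difference from this sketch is that in the paper the text-side prompts are learned rather than hand-written, which is what steers CLIP toward socio-moral notions of inappropriateness instead of a generic reading of the word "inappropriate".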

