Uncurated Image-Text Datasets: Shedding Light on Demographic Bias

04/06/2023
by   Noa Garcia, et al.

The increasing tendency to collect large and uncurated datasets to train vision-and-language models has raised concerns about fair representations. It is known that even small but manually annotated datasets, such as MSCOCO, are affected by societal bias. This problem, far from being solved, may be getting worse with data crawled from the Internet without much control. In addition, the lack of tools to analyze societal bias in big collections of images makes addressing the problem extremely challenging. Our first contribution is to annotate part of the Google Conceptual Captions dataset, widely used for training vision-and-language models, with four demographic and two contextual attributes. Our second contribution is to conduct a comprehensive analysis of the annotations, focusing on how different demographic groups are represented. Our last contribution lies in evaluating three prevailing vision-and-language tasks: image captioning, text-image CLIP embeddings, and text-to-image generation, showing that societal bias is a persistent problem in all of them.
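The abstract does not spell out how bias is measured in the CLIP-embedding evaluation, but a common probe compares image-text similarity scores across demographic groups. Below is a minimal sketch of that idea with synthetic stand-in embeddings; the group names, embedding dimension, and gap metric are illustrative assumptions, not the paper's exact protocol.

```python
import numpy as np

def cosine_sim(a, b):
    # Standard cosine similarity between two embedding vectors.
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def group_similarity_gap(image_embs_by_group, text_emb):
    """Mean image-text cosine similarity per demographic group, plus the
    max pairwise gap between group means (a simple bias indicator:
    a large gap means the caption aligns more with one group's images)."""
    means = {group: float(np.mean([cosine_sim(e, text_emb) for e in embs]))
             for group, embs in image_embs_by_group.items()}
    gap = max(means.values()) - min(means.values())
    return means, gap

rng = np.random.default_rng(0)
# Synthetic 8-d stand-ins for CLIP image embeddings, grouped by a
# hypothetical demographic annotation (e.g. perceived gender).
groups = {"group_a": [rng.normal(size=8) for _ in range(5)],
          "group_b": [rng.normal(size=8) for _ in range(5)]}
caption_emb = rng.normal(size=8)  # stand-in for a CLIP text embedding

means, gap = group_similarity_gap(groups, caption_emb)
print(means, gap)
```

In practice the embeddings would come from a real CLIP model over the annotated Conceptual Captions subset, and the gap would be aggregated over many captions rather than a single one.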

Related research

- 09/03/2019: The Woman Worked as a Babysitter: On Biases in Language Generation. We present a systematic study of biases in natural language generation (...
- 05/25/2022: Perturbation Augmentation for Fairer NLP. Unwanted and often harmful social biases are becoming ever more salient ...
- 06/16/2021: Understanding and Evaluating Racial Biases in Image Captioning. Image captioning is an important task for benchmarking visual reasoning ...
- 05/18/2022: "I'm sorry to hear that": finding bias in language models with a holistic descriptor dataset. As language models grow in popularity, their biases across all possible ...
- 01/14/2021: Persistent Anti-Muslim Bias in Large Language Models. It has been observed that large-scale language models capture undesirabl...
- 03/22/2022: A Prompt Array Keeps the Bias Away: Debiasing Vision-Language Models with Adversarial Learning. Vision-language models can encode societal biases and stereotypes, but t...
- 09/11/2023: Challenges in Annotating Datasets to Quantify Bias in Under-represented Society. Recent advances in artificial intelligence, including the development of...
