Gender and Racial Bias in Visual Question Answering Datasets

05/17/2022
by   Yusuke Hirota, et al.

Vision-and-language tasks have drawn increasing attention as a means to evaluate human-like reasoning in machine learning models. A popular task in the field is visual question answering (VQA), which aims to answer questions about images. However, VQA models have been shown to exploit language bias, learning statistical correlations between questions and answers without looking at the image content: e.g., questions about the color of a banana are answered with "yellow" even if the banana in the image is green. If societal bias (e.g., sexism, racism, ableism) is present in the training data, this shortcut may lead VQA models to learn harmful stereotypes. For this reason, we investigate gender and racial bias in five VQA datasets. In our analysis, we find that the distribution of answers differs markedly between questions about women and questions about men, and that detrimental gender-stereotypical samples exist. Likewise, we identify that specific race-related attributes are underrepresented, while potentially discriminatory samples appear in the analyzed datasets. Our findings suggest that using VQA datasets without considering and addressing the potentially harmful stereotypes they contain carries real dangers. We conclude the paper by proposing solutions to alleviate the problem before, during, and after the dataset collection process.
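The abstract's comparison of answer distributions for questions about women versus men can be illustrated with a small sketch. The helper names, the toy (question, answer) pairs, and the use of total variation distance as the gap measure are all assumptions for illustration; the paper's actual analysis pipeline and metrics may differ.

```python
from collections import Counter

def answer_distribution(samples, keyword):
    """Normalized answer frequencies over questions mentioning `keyword`.

    Matching is done on whitespace-split tokens so that "man" does not
    spuriously match inside "woman". Toy helper, not the paper's method.
    """
    answers = [a for q, a in samples if keyword in q.lower().split()]
    counts = Counter(answers)
    total = sum(counts.values())
    return {ans: n / total for ans, n in counts.items()}

def total_variation(p, q):
    """Total variation distance between two answer distributions (0 = identical)."""
    keys = set(p) | set(q)
    return 0.5 * sum(abs(p.get(k, 0.0) - q.get(k, 0.0)) for k in keys)

# Hypothetical (question, answer) pairs standing in for a VQA dataset.
samples = [
    ("What is the woman doing?", "cooking"),
    ("What is the woman doing?", "shopping"),
    ("What is the man doing?", "surfing"),
    ("What is the man doing?", "skateboarding"),
    ("What is the man doing?", "cooking"),
]

p_woman = answer_distribution(samples, "woman")
p_man = answer_distribution(samples, "man")
print(round(total_variation(p_woman, p_man), 3))  # prints 0.667 on this toy data
```

A large distance on a real dataset would indicate that the answer vocabulary is strongly conditioned on the gender mentioned in the question, which is the kind of skew the paper reports.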


