MultiModal Bias: Introducing a Framework for Stereotypical Bias Assessment beyond Gender and Race in Vision Language Models

03/16/2023
by   Sepehr Janghorbani, et al.

Recent breakthroughs in self-supervised training have led to a new class of pretrained vision-language models. While there have been investigations of bias in multimodal models, they have mostly focused on gender and racial bias, giving much less attention to other relevant groups, such as minorities with regard to religion, nationality, sexual orientation, or disabilities. This is mainly due to the lack of suitable benchmarks for such groups. We seek to address this gap by providing a visual and textual bias benchmark called MMBias, consisting of around 3,800 images and phrases covering 14 population subgroups. We use this dataset to assess bias in several prominent self-supervised multimodal models, including CLIP, ALBEF, and ViLT. Our results show that these models exhibit meaningful bias favoring certain groups. Finally, we introduce a debiasing method designed specifically for such large pretrained models that can be applied as a post-processing step to mitigate bias while preserving the model's accuracy.
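Bias in joint vision-language embedding spaces is commonly quantified by comparing how strongly images of two target groups associate with sets of "pleasant" and "unpleasant" attribute phrases. As a rough illustration of this idea (not the paper's exact metric), the sketch below computes a WEAT-style effect size over embedding vectors; the random vectors stand in for CLIP image and text embeddings, which in practice would come from the model's encoders.

```python
import numpy as np

def cosine(a, b):
    """Cosine similarity between two embedding vectors."""
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

def association(x, attrs_a, attrs_b):
    """Mean similarity of x to attribute set A minus attribute set B."""
    return (np.mean([cosine(x, a) for a in attrs_a])
            - np.mean([cosine(x, b) for b in attrs_b]))

def bias_score(group_x, group_y, attrs_a, attrs_b):
    """WEAT-style effect size: difference in mean association of the two
    target groups, normalized by the pooled standard deviation."""
    assoc_x = [association(x, attrs_a, attrs_b) for x in group_x]
    assoc_y = [association(y, attrs_a, attrs_b) for y in group_y]
    pooled = np.std(assoc_x + assoc_y, ddof=1)
    return (np.mean(assoc_x) - np.mean(assoc_y)) / pooled

# Random stand-ins for model embeddings (e.g., 512-d CLIP features):
# two target-group image sets and two attribute phrase sets.
rng = np.random.default_rng(0)
group_x = rng.normal(size=(20, 512))   # images of group X
group_y = rng.normal(size=(20, 512))   # images of group Y
attrs_a = rng.normal(size=(10, 512))   # "pleasant" phrase embeddings
attrs_b = rng.normal(size=(10, 512))   # "unpleasant" phrase embeddings

score = bias_score(group_x, group_y, attrs_a, attrs_b)
print(score)
```

A score near zero indicates no differential association; a large positive (or negative) score indicates the model associates group X (or Y) more strongly with the pleasant attributes. The group and attribute set names here are illustrative placeholders.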


research
01/21/2023

Blacks is to Anger as Whites is to Joy? Understanding Latent Affective Bias in Large Pre-trained Neural Language Models

Groundbreaking inventions and highly significant performance improvement...
research
07/18/2022

Selection Bias Induced Spurious Correlations in Large Language Models

In this work we show how large language models (LLMs) can learn statisti...
research
04/18/2021

Worst of Both Worlds: Biases Compound in Pre-trained Vision-and-Language Models

Numerous works have analyzed biases in vision and pre-trained language m...
research
08/17/2022

Deep Generative Views to Mitigate Gender Classification Bias Across Gender-Race Groups

Published studies have suggested the bias of automated face-based gender...
research
02/11/2023

Counter-GAP: Counterfactual Bias Evaluation through Gendered Ambiguous Pronouns

Bias-measuring datasets play a critical role in detecting biased behavio...
research
09/10/2023

Gender Bias in Multimodal Models: A Transnational Feminist Approach Considering Geographical Region and Culture

Deep learning based visual-linguistic multimodal models such as Contrast...
research
02/03/2023

Controlling for Stereotypes in Multimodal Language Model Evaluation

We propose a methodology and design two benchmark sets for measuring to ...
