Fairness in AI Systems: Mitigating gender bias from language-vision models

05/03/2023
by Lavisha Aggarwal, et al.

Our society is plagued by several biases, including racial bias, caste bias, and gender bias. Not long ago, many of these notions went largely unexamined. Passed down and amplified across generations, these biases have come to be treated as expected norms by certain groups in society. One notable example is gender bias: whether in politics, everyday life, or the corporate world, systematic differences are observed in the involvement of the two groups. This differential distribution, being part of society at large, is reflected in recorded data as well. Machine learning depends almost entirely on the availability of data, and the premise of learning from data to make predictions assumes that the data defines the expected behavior at large. Hence, models trained on biased data inherit those biases, and with the current prevalence of ML in products, this can become a major obstacle on the path to equality and justice. This work studies and attempts to alleviate gender bias in language-vision models, particularly for the task of image captioning. We study the extent of gender bias in existing datasets and propose a methodology to mitigate its impact in caption-based language-vision models.
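As a rough illustration of how one might quantify gender skew in a captioning dataset (this is a minimal sketch, not the paper's methodology, which the abstract does not detail), the snippet below counts gendered terms in COCO-style captions. The word lists, file path, and labeling scheme are assumptions chosen for illustration; real bias audits typically use larger, curated lexicons.

```python
# Minimal sketch: estimate gender skew in a COCO-format caption file by
# labeling each caption as male, female, both, or neutral based on word lists.
import json
from collections import Counter

# Hypothetical word lists for illustration only.
MALE_TERMS = {"man", "men", "male", "boy", "boys", "he", "his", "him"}
FEMALE_TERMS = {"woman", "women", "female", "girl", "girls", "she", "her", "hers"}

def caption_gender(caption: str) -> str:
    """Label a caption as 'male', 'female', 'both', or 'neutral'."""
    words = set(caption.lower().replace(".", " ").replace(",", " ").split())
    has_m, has_f = bool(words & MALE_TERMS), bool(words & FEMALE_TERMS)
    if has_m and has_f:
        return "both"
    if has_m:
        return "male"
    if has_f:
        return "female"
    return "neutral"

def dataset_skew(annotation_file: str) -> Counter:
    """Count gender labels over all captions in a COCO-format annotation file."""
    with open(annotation_file) as f:
        annotations = json.load(f)["annotations"]
    return Counter(caption_gender(a["caption"]) for a in annotations)

if __name__ == "__main__":
    counts = dataset_skew("captions_train2017.json")  # hypothetical path
    gendered = counts["male"] + counts["female"]
    if gendered:
        print(f"male share of gendered captions: {counts['male'] / gendered:.2%}")
    print(counts)
```

A large imbalance between the male and female counts (or between the contexts in which they occur) is the kind of dataset-level skew that a captioning model can learn and amplify.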


