Blacks is to Anger as Whites is to Joy? Understanding Latent Affective Bias in Large Pre-trained Neural Language Models

01/21/2023
by Anoop Kadan, et al.

Transformer-based large Pre-trained Language Models (PLMs) have driven groundbreaking advances and substantial performance improvements in deep-learning-based Natural Language Processing. The wide availability of unlabeled data in the deluge of human-generated text, together with self-supervised learning strategies, has accelerated the success of large PLMs in language generation, language understanding, and related tasks. At the same time, latent historical biases toward particular genders, races, and other groups, encoded intentionally or unintentionally into these corpora, harm protected groups and call into question the utility and efficacy of large PLMs in many real-world applications. In this paper, we present an extensive investigation of "Affective Bias" in large PLMs, that is, any biased association of emotions such as anger, fear, or joy with a particular gender, race, or religion, with respect to the downstream task of textual emotion detection. We begin at the corpus level, searching for imbalanced distributions of affective words within a domain in the large-scale corpora used to pre-train and fine-tune PLMs. We then quantify affective bias in model predictions through an extensive set of class-based and intensity-based evaluations on various bias evaluation corpora. Our results show statistically significant affective bias in PLM-based emotion detection systems, indicating biased associations of certain emotions with a particular gender, race, and religion.
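
The corpus-level analysis mentioned in the abstract can be illustrated with a minimal sketch: count how often emotion-lexicon words occur in sentences that mention each demographic group. The tiny lexicon and group-term lists below are illustrative stand-ins (in the spirit of resources such as the NRC Emotion Lexicon), not the paper's actual resources or protocol.

```python
# Hypothetical sketch of a corpus-level affective bias probe: tally
# emotion-lexicon words in sentences mentioning each demographic group.
import re
from collections import defaultdict

# Assumption: a toy emotion lexicon mapping words to emotion categories;
# a real study would use a full lexicon such as NRC EmoLex.
emotion_lexicon = {
    "furious": "anger", "rage": "anger",
    "happy": "joy", "delighted": "joy",
    "terrified": "fear", "scared": "fear",
}
group_terms = {
    "female": {"she", "her", "woman"},
    "male": {"he", "his", "man"},
}

def affective_counts(sentences):
    """Count emotion-word occurrences in sentences mentioning each group."""
    counts = defaultdict(lambda: defaultdict(int))
    for sent in sentences:
        tokens = re.findall(r"[a-z']+", sent.lower())
        for group, terms in group_terms.items():
            if terms & set(tokens):  # sentence mentions this group
                for tok in tokens:
                    if tok in emotion_lexicon:
                        counts[group][emotion_lexicon[tok]] += 1
    return counts

corpus = [
    "She was furious about the delay.",
    "He was delighted with the result.",
]
print({g: dict(c) for g, c in affective_counts(corpus).items()})
```

A skewed table of counts (e.g., anger words concentrating around one group's mentions) would be the kind of corpus-level imbalance the paper searches for before examining model predictions.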
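The class-based evaluation can likewise be sketched as a counterfactual probe: feed an emotion classifier templates that differ only in the demographic term, and test whether the predicted class distribution depends on that term. The checkpoint name, templates, and chi-square test below are assumptions chosen for illustration, not the paper's actual evaluation corpora or statistical procedure.

```python
# Hypothetical class-based affective bias probe, assuming some emotion
# classifier is available through the Hugging Face pipeline API.
from collections import Counter

from scipy.stats import chi2_contingency
from transformers import pipeline

# Assumption: any text-classification checkpoint fine-tuned for emotion
# detection; this public checkpoint is one example.
classifier = pipeline(
    "text-classification",
    model="j-hartmann/emotion-english-distilroberta-base",
)

# Minimal counterfactual templates: only the demographic term varies.
templates = [
    "The {} person walked into the room.",
    "My {} neighbor spoke to me this morning.",
    "A {} man was waiting at the bus stop.",
]
groups = ["Black", "White"]

# Count the predicted emotion class per group across all templates.
counts = {g: Counter() for g in groups}
for template in templates:
    for g in groups:
        pred = classifier(template.format(g))[0]["label"]
        counts[g][pred] += 1

# Build a contingency table (groups x observed emotion classes) and test
# whether predicted classes are independent of the demographic term.
labels = sorted(set().union(*counts.values()))
table = [[counts[g][lbl] for lbl in labels] for g in groups]
chi2, p_value, _, _ = chi2_contingency(table)
print(f"chi2={chi2:.3f}, p={p_value:.4f}")
```

With a realistically large template set, a small p-value under this kind of independence test is what "statistically significant affective bias" refers to; intensity-based evaluation would compare prediction scores rather than class counts.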

Related research

04/21/2022
Towards an Enhanced Understanding of Bias in Pre-trained Neural Language Models: A Survey with Special Emphasis on Affective Bias
The remarkable progress in Natural Language Processing (NLP) brought abo...

03/16/2023
MultiModal Bias: Introducing a Framework for Stereotypical Bias Assessment beyond Gender and Race in Vision Language Models
Recent breakthroughs in self supervised training have led to a new class...

06/07/2023
Language Models Get a Gender Makeover: Mitigating Gender Bias with Few-Shot Data Interventions
Societal biases present in pre-trained large language models are a criti...

09/21/2022
Bias at a Second Glance: A Deep Dive into Bias for German Educational Peer-Review Data Modeling
Natural Language Processing (NLP) has become increasingly utilized to pr...

10/12/2021
Deep Learning for Bias Detection: From Inception to Deployment
To create a more inclusive workplace, enterprises are actively investing...

11/01/2019
On the Unintended Social Bias of Training Language Generation Models with Data from Local Media
There are concerns that neural language models may preserve some of the ...

08/24/2023
Mind vs. Mouth: On Measuring Re-judge Inconsistency of Social Bias in Large Language Models
Recent researches indicate that Pre-trained Large Language Models (LLMs)...
