Bias Against 93 Stigmatized Groups in Masked Language Models and Downstream Sentiment Classification Tasks

06/08/2023
by Katelyn X. Mei, et al.

The rapid deployment of artificial intelligence (AI) models demands a thorough investigation of the biases and risks inherent in these models to understand their impact on individuals and society. This study extends bias evaluation in prior work by examining bias against social stigmas at a large scale. It focuses on 93 stigmatized groups in the United States, covering a wide range of conditions related to disease, disability, drug use, mental illness, religion, sexuality, socioeconomic status, and other factors. We investigate bias against these groups in English pre-trained Masked Language Models (MLMs) and in their downstream sentiment classification tasks. To evaluate the presence of bias against the 93 stigmatized conditions, we identify 29 non-stigmatized conditions for comparative analysis. Building on a psychological scale of social rejection, the Social Distance Scale, we prompt six MLMs: RoBERTa-base, RoBERTa-large, XLNet-large, BERTweet-base, BERTweet-large, and DistilBERT. We use human annotations of the words these models predict to measure the extent of bias against stigmatized groups. When prompts include stigmatized conditions, the probability that MLMs predict negative words is approximately 20 percent higher than when prompts include non-stigmatized conditions. In the sentiment classification tasks, sentences that include stigmatized conditions related to disease, disability, education, and mental illness are more likely to be classified as negative. We also observe a strong correlation between bias in MLMs and bias in their downstream sentiment classifiers (r = 0.79). The evidence indicates that MLMs and their downstream sentiment classification tasks exhibit biases against socially stigmatized groups.
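A minimal sketch of this kind of probing setup, assuming a Social Distance Scale-style prompt template and Hugging Face's fill-mask and sentiment-analysis pipelines. The prompt wording, the example condition strings, and the DistilBERT SST-2 checkpoint used for classification are illustrative assumptions, not the paper's exact prompts, conditions, or models.

```python
from transformers import pipeline

# Illustrative sketch (not the authors' released code): probe a masked language
# model with a social-distance-style prompt and compare the fill words it
# predicts for a stigmatized vs. a non-stigmatized condition, then check how a
# downstream sentiment classifier labels sentences mentioning the same conditions.

# 1) Masked-token probing with one of the MLMs studied in the paper.
fill_mask = pipeline("fill-mask", model="roberta-base")

def top_fill_words(condition, k=5):
    mask = fill_mask.tokenizer.mask_token  # "<mask>" for RoBERTa
    # Hypothetical prompt template loosely modeled on the Social Distance Scale.
    prompt = f"It is {mask} to have a person who {condition} as my neighbor."
    return [(p["token_str"].strip(), round(p["score"], 3))
            for p in fill_mask(prompt, top_k=k)]

# 2) Downstream sentiment classification of sentences mentioning each condition.
sentiment = pipeline("sentiment-analysis",
                     model="distilbert-base-uncased-finetuned-sst-2-english")

conditions = [
    "has a drug addiction",  # example stigmatized condition
    "has brown hair",        # example non-stigmatized control condition
]

for condition in conditions:
    sentence = f"I am moving next door to a person who {condition}."
    print(condition)
    print("  MLM fill words:", top_fill_words(condition))
    print("  Sentiment:", sentiment(sentence)[0])
```

In the paper's setup, the predicted fill words are judged by human annotators rather than by an automatic lexicon, and the comparison is aggregated over 93 stigmatized and 29 non-stigmatized conditions across six MLMs; the snippet above only shows the mechanical prompting and classification steps.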


