An Analysis of Social Biases Present in BERT Variants Across Multiple Languages

11/25/2022
by Aristides Milios, et al.

Although large pre-trained language models have achieved great success in many NLP tasks, it has been shown that they reflect human biases from their pre-training corpora. This bias may lead to undesirable outcomes when these models are applied in real-world settings. In this paper, we investigate the bias present in monolingual BERT models across a diverse set of languages (English, Greek, and Persian). While recent research has mostly focused on gender-related biases, we analyze religious and ethnic biases as well and propose a template-based method to measure any kind of bias, based on sentence pseudo-likelihood, that can handle morphologically complex languages with gender-based adjective declensions. We analyze each monolingual model via this method and visualize cultural similarities and differences across different dimensions of bias. Ultimately, we conclude that current methods of probing for bias are highly language-dependent, necessitating cultural insights regarding the unique ways bias is expressed in each language and culture (e.g. through coded language, synecdoche, and other similar linguistic concepts). We also hypothesize that higher measured social biases in the non-English BERT models correlate with user-generated content in their training.
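
As a rough illustration of the pseudo-likelihood scoring the abstract describes (a sketch, not the authors' released code), the snippet below assumes the HuggingFace transformers library, a bert-base-uncased checkpoint, and a hypothetical religion-related template. Each filled template is scored by masking one token at a time and summing the model's log-probabilities for the original tokens; systematic score gaps between demographic terms then indicate a bias signal.

```python
# Sketch of template-based bias probing via sentence pseudo-log-likelihood (PLL).
# Model name and template are assumptions for illustration, not from the paper.
import torch
from transformers import AutoTokenizer, AutoModelForMaskedLM

MODEL_NAME = "bert-base-uncased"  # swap in a Greek or Persian BERT for those languages
tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForMaskedLM.from_pretrained(MODEL_NAME)
model.eval()

def pseudo_log_likelihood(sentence: str) -> float:
    """Sum log P(token_i | rest of sentence), masking one token at a time."""
    enc = tokenizer(sentence, return_tensors="pt")
    input_ids = enc["input_ids"][0]
    total = 0.0
    with torch.no_grad():
        for i in range(1, input_ids.size(0) - 1):  # skip [CLS] and [SEP]
            masked = input_ids.clone()
            masked[i] = tokenizer.mask_token_id
            logits = model(masked.unsqueeze(0)).logits[0, i]
            log_probs = torch.log_softmax(logits, dim=-1)
            total += log_probs[input_ids[i]].item()
    return total

# Hypothetical template: same stereotypical attribute, varying demographic term.
template = "The {group} person is violent."
for group in ["christian", "muslim", "jewish"]:
    sentence = template.format(group=group)
    print(f"{sentence:40s} PLL = {pseudo_log_likelihood(sentence):.2f}")
```

For Greek or Persian, the same loop would run over a monolingual BERT checkpoint, with the templates adapted to handle gender-based adjective declensions as the paper notes.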

Related research

03/10/2022
Speciesist Language and Nonhuman Animal Bias in English Masked Language Models
Various existing studies have analyzed what social biases are inherited ...

11/15/2021
Assessing gender bias in medical and scientific masked language models with StereoSet
NLP systems use language models such as Masked Language Models (MLMs) th...

09/13/2021
Mitigating Language-Dependent Ethnic Bias in BERT
BERT and other large-scale language models (LMs) contain gender and raci...

06/27/2023
Gender Bias in BERT – Measuring and Analysing Biases through Sentiment Rating in a Realistic Downstream Classification Task
Pretrained language models are publicly available and constantly finetun...

05/19/2023
Bias Beyond English: Counterfactual Tests for Bias in Sentiment Analysis in Four Languages
Sentiment analysis (SA) systems are used in many products and hundreds o...

11/26/2022
Gender Biases Unexpectedly Fluctuate in the Pre-training Stage of Masked Language Models
Masked language models pick up gender biases during pre-training. Such b...

09/16/2023
Investigating Subtler Biases in LLMs: Ageism, Beauty, Institutional, and Nationality Bias in Generative Models
LLMs are increasingly powerful and widely used to assist users in a vari...
