Quantifying Gender Bias Towards Politicians in Cross-Lingual Language Models

04/15/2021
by Karolina Stańczak et al.

While the prevalence of large pre-trained language models has led to significant improvements in the performance of NLP systems, recent research has demonstrated that these models inherit societal biases present in natural language. In this paper, we explore a simple method for probing pre-trained language models for gender bias, which we use to conduct a multilingual study of gender bias towards politicians. We construct a dataset of 250k politicians from most countries in the world and quantify adjective and verb usage around those politicians' names as a function of their gender. We conduct our study in 7 languages across 6 different language modeling architectures. Our results demonstrate that the stance towards politicians in pre-trained language models is highly dependent on the language used. Finally, contrary to previous findings, our study suggests that larger language models do not tend to be significantly more gender-biased than smaller ones.
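
The core probing idea, comparing the words a language model associates with a politician's name depending on that politician's gender, can be illustrated with a minimal sketch. The template, the example names, and the use of Hugging Face's fill-mask pipeline with multilingual BERT are illustrative assumptions rather than the paper's exact procedure, which aggregates adjective and verb usage over the full 250k-politician dataset across several models and languages.

```python
# Minimal sketch (illustrative assumptions, not the paper's exact method):
# compare which words a masked language model predicts near politicians'
# names, grouped by the politician's gender.
from transformers import pipeline

# One multilingual masked LM; the paper probes several architectures and languages.
fill_mask = pipeline("fill-mask", model="bert-base-multilingual-cased")

# Hypothetical probing template; the masked slot targets an adjective position.
TEMPLATE = "{name} is a [MASK] politician."

# Hypothetical example names standing in for the paper's 250k-politician dataset;
# gender labels come from the dataset, not from the model.
names_by_gender = {
    "female": ["Angela Merkel", "Jacinda Ardern"],
    "male": ["Emmanuel Macron", "Justin Trudeau"],
}

for gender, names in names_by_gender.items():
    for name in names:
        # Top predictions for the masked slot, with model probabilities.
        predictions = fill_mask(TEMPLATE.format(name=name), top_k=5)
        words = [(p["token_str"], round(p["score"], 3)) for p in predictions]
        print(f"{gender:6s} | {name}: {words}")
```

A fuller analysis along these lines would aggregate such predictions over many names and compare adjective and verb distributions across genders, rather than reading off individual outputs.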

Related research

04/12/2023 · Measuring Gender Bias in West Slavic Language Models
Pre-trained language models have been known to perpetuate biases from th...

09/18/2023 · Evaluating Gender Bias of Pre-trained Language Models in Natural Language Inference by Considering All Labels
Discriminatory social biases, including gender biases, have been found i...

04/18/2021 · Worst of Both Worlds: Biases Compound in Pre-trained Vision-and-Language Models
Numerous works have analyzed biases in vision and pre-trained language m...

07/10/2022 · FairDistillation: Mitigating Stereotyping in Language Models
Large pre-trained language models are successfully being used in a varie...

07/06/2022 · Gender Biases and Where to Find Them: Exploring Gender Bias in Pre-Trained Transformer-based Language Models Using Movement Pruning
Language model debiasing has emerged as an important field of study in t...

03/26/2022 · Metaphors in Pre-Trained Language Models: Probing and Generalization Across Datasets and Languages
Human languages are full of metaphorical expressions. Metaphors help peo...

08/08/2022 · Debiased Large Language Models Still Associate Muslims with Uniquely Violent Acts
Recent work demonstrates a bias in the GPT-3 model towards generating vi...
