Persistent Anti-Muslim Bias in Large Language Models

01/14/2021
by   Abubakar Abid, et al.
66

It has been observed that large-scale language models capture undesirable societal biases, e.g. relating to race and gender; yet religious bias has been relatively unexplored. We demonstrate that GPT-3, a state-of-the-art contextual language model, captures persistent Muslim-violence bias. We probe GPT-3 in various ways, including prompt completion, analogical reasoning, and story generation, to understand this anti-Muslim bias, demonstrating that it appears consistently and creatively in different uses of the model and that it is severe even compared to biases about other religious groups. For instance, "Muslim" is analogized to "terrorist" in 23 mapped to "money" in 5 needed to overcome this bias with adversarial text prompts, and find that use of the most positive 6 adjectives reduces violent completions for "Muslims" from 66

READ FULL TEXT

page 6

page 8

page 14

04/20/2020

StereoSet: Measuring stereotypical bias in pretrained language models

A stereotype is an over-generalized belief about a particular group of p...
06/23/2022

Towards WinoQueer: Developing a Benchmark for Anti-Queer Bias in Large Language Models

This paper presents exploratory work on whether and to what extent biase...
07/16/2021

Intersectional Bias in Causal Language Models

To examine whether intersectional bias can be observed in language gener...
11/15/2021

Assessing gender bias in medical and scientific masked language models with StereoSet

NLP systems use language models such as Masked Language Models (MLMs) th...
08/02/2022

How UMass-FSD Inadvertently Leverages Temporal Bias

First Story Detection describes the task of identifying new events in a ...
06/24/2021

Towards Understanding and Mitigating Social Biases in Language Models

As machine learning methods are deployed in real-world settings such as ...

Code Repositories

clip-italian

CLIP (Contrastive Language–Image Pre-training) for Italian


view repo