Persistent Anti-Muslim Bias in Large Language Models

by   Abubakar Abid, et al.

It has been observed that large-scale language models capture undesirable societal biases, e.g. relating to race and gender; yet religious bias has been relatively unexplored. We demonstrate that GPT-3, a state-of-the-art contextual language model, captures persistent Muslim-violence bias. We probe GPT-3 in various ways, including prompt completion, analogical reasoning, and story generation, to understand this anti-Muslim bias, demonstrating that it appears consistently and creatively in different uses of the model and that it is severe even compared to biases about other religious groups. For instance, "Muslim" is analogized to "terrorist" in 23 mapped to "money" in 5 needed to overcome this bias with adversarial text prompts, and find that use of the most positive 6 adjectives reduces violent completions for "Muslims" from 66


page 6

page 8

page 14


StereoSet: Measuring stereotypical bias in pretrained language models

A stereotype is an over-generalized belief about a particular group of p...

Towards WinoQueer: Developing a Benchmark for Anti-Queer Bias in Large Language Models

This paper presents exploratory work on whether and to what extent biase...

Intersectional Bias in Causal Language Models

To examine whether intersectional bias can be observed in language gener...

Assessing gender bias in medical and scientific masked language models with StereoSet

NLP systems use language models such as Masked Language Models (MLMs) th...

How UMass-FSD Inadvertently Leverages Temporal Bias

First Story Detection describes the task of identifying new events in a ...

Towards Understanding and Mitigating Social Biases in Language Models

As machine learning methods are deployed in real-world settings such as ...

Code Repositories


CLIP (Contrastive Language–Image Pre-training) for Italian

view repo