SeeGULL: A Stereotype Benchmark with Broad Geo-Cultural Coverage Leveraging Generative Models

05/19/2023
by Akshita Jha et al.

Stereotype benchmark datasets are crucial for detecting and mitigating social stereotypes about groups of people in NLP models. However, existing datasets are limited in size and coverage, and are largely restricted to stereotypes prevalent in Western society. This is especially problematic as language technologies gain hold across the globe. To address this gap, we present SeeGULL, a broad-coverage stereotype dataset built by utilizing the generative capabilities of large language models such as PaLM and GPT-3, and by leveraging a globally diverse rater pool to validate the prevalence of those stereotypes in society. SeeGULL is in English and contains stereotypes about identity groups spanning 178 countries across 8 geo-political regions on 6 continents, as well as state-level identities within the US and India. We also include fine-grained offensiveness scores for the different stereotypes and demonstrate their global disparities. Furthermore, we include comparative annotations about the same groups from annotators living in the region versus those based in North America, and demonstrate that within-region stereotypes about groups differ from those prevalent in North America. CONTENT WARNING: This paper contains stereotype examples that may be offensive.

Related research

Knowledge of cultural moral norms in large language models (06/02/2023)
Building Socio-culturally Inclusive Stereotype Resources with Community Engagement (07/20/2023)
Moral Mimicry: Large Language Models Produce Moral Rationalizations Tailored to Political Identity (09/24/2022)
NLPositionality: Characterizing Design Biases of Datasets and Models (06/02/2023)
Detoxifying Language Models Risks Marginalizing Minority Voices (04/13/2021)
Holistic Evaluation of Language Models (11/16/2022)
