"I'm sorry to hear that": finding bias in language models with a holistic descriptor dataset

05/18/2022
by   Eric Michael Smith, et al.
0

As language models grow in popularity, their biases across all possible markers of demographic identity should be measured and addressed in order to avoid perpetuating existing societal harms. Many datasets for measuring bias currently exist, but they are restricted in their coverage of demographic axes, and are commonly used with preset bias tests that presuppose which types of biases the models exhibit. In this work, we present a new, more inclusive dataset, HOLISTICBIAS, which consists of nearly 600 descriptor terms across 13 different demographic axes. HOLISTICBIAS was assembled in conversation with experts and community members with lived experience through a participatory process. We use these descriptors combinatorially in a set of bias measurement templates to produce over 450,000 unique sentence prompts, and we use these prompts to explore, identify, and reduce novel forms of bias in several generative models. We demonstrate that our dataset is highly efficacious for measuring previously unmeasurable biases in token likelihoods and generations from language models, as well as in an offensiveness classifier. We will invite additions and amendments to the dataset, and we hope it will help serve as a basis for easy-to-use and more standardized methods for evaluating bias in NLP models.

READ FULL TEXT
research
05/22/2023

This Prompt is Measuring <MASK>: Evaluating Bias Evaluation in Language Models

Bias research in NLP seeks to analyse models for social biases, thus hel...
research
09/30/2020

CrowS-Pairs: A Challenge Dataset for Measuring Social Biases in Masked Language Models

Pretrained language models, especially masked language models (MLMs) hav...
research
05/25/2022

Perturbation Augmentation for Fairer NLP

Unwanted and often harmful social biases are becoming ever more salient ...
research
09/30/2018

Identifying Bias in AI using Simulation

Machine learned models exhibit bias, often because the datasets used to ...
research
08/14/2019

Debiasing Personal Identities in Toxicity Classification

As Machine Learning models continue to be relied upon for making automat...
research
04/06/2023

Uncurated Image-Text Datasets: Shedding Light on Demographic Bias

The increasing tendency to collect large and uncurated datasets to train...
research
12/03/2022

Towards Robust NLG Bias Evaluation with Syntactically-diverse Prompts

We present a robust methodology for evaluating biases in natural languag...

Please sign up or login with your details

Forgot password? Click here to reset