NLPositionality: Characterizing Design Biases of Datasets and Models

06/02/2023
by Sebastin Santy, et al.

Design biases in NLP systems, such as performance differences for different populations, often stem from their creator's positionality, i.e., views and lived experiences shaped by identity and background. Despite the prevalence and risks of design biases, they are hard to quantify because researcher, system, and dataset positionality is often unobserved. We introduce NLPositionality, a framework for characterizing design biases and quantifying the positionality of NLP datasets and models. Our framework continuously collects annotations from a diverse pool of volunteer participants on LabintheWild, and statistically quantifies alignment with dataset labels and model predictions. We apply NLPositionality to existing datasets and models for two tasks – social acceptability and hate speech detection. To date, we have collected 16,299 annotations in over a year for 600 instances from 1,096 annotators across 87 countries. We find that datasets and models align predominantly with Western, White, college-educated, and younger populations. Additionally, certain groups, such as non-binary people and non-native English speakers, are further marginalized by datasets and models as they rank least in alignment across all tasks. Finally, we draw from prior literature to discuss how researchers can examine their own positionality and that of their datasets and models, opening the door for more inclusive NLP systems.
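
The framework's central statistic, as described above, is an alignment score between each demographic group's annotations and a dataset's labels (or a model's predictions). The sketch below shows one plausible way to compute such scores, using Pearson's r between a group's per-instance mean ratings and the original labels. The record layout, field names, and the group_alignment function are illustrative assumptions for this sketch, not the paper's actual pipeline.

    # Minimal sketch: per-group alignment between annotations and dataset labels.
    # Data shapes and field names here are illustrative assumptions.
    from collections import defaultdict
    from scipy.stats import pearsonr

    def group_alignment(annotations, dataset_labels):
        """annotations: list of records like
             {"instance_id": ..., "group": ..., "rating": float}
           dataset_labels: dict mapping instance_id -> float label.
           Returns {group: (pearson_r, p_value)}."""
        # Collect each group's ratings per instance.
        per_group = defaultdict(lambda: defaultdict(list))
        for a in annotations:
            per_group[a["group"]][a["instance_id"]].append(a["rating"])

        results = {}
        for group, by_instance in per_group.items():
            ids = [i for i in by_instance if i in dataset_labels]
            if len(ids) < 2:
                continue  # Pearson's r needs at least two paired points.
            group_means = [sum(by_instance[i]) / len(by_instance[i]) for i in ids]
            labels = [dataset_labels[i] for i in ids]
            results[group] = pearsonr(group_means, labels)
        return results

Under this reading, a group with a higher r is better represented by the dataset's labels; in practice, one would also correct for multiple comparisons before ranking many groups.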

Related research

08/31/2023 · CReHate: Cross-cultural Re-annotation of English Hate Speech Dataset
English datasets predominantly reflect the perspectives of certain natio...

05/02/2020 · Social Biases in NLP Models as Barriers for Persons with Disabilities
Building equitable and inclusive NLP technologies demands consideration ...

10/30/2019 · Toward Gender-Inclusive Coreference Resolution
Correctly resolving textual mentions of people fundamentally entails mak...

10/12/2021 · On Releasing Annotator-Level Labels and Information in Datasets
A common practice in building NLP datasets, especially using crowd-sourc...

05/19/2023 · SeeGULL: A Stereotype Benchmark with Broad Geo-Cultural Coverage Leveraging Generative Models
Stereotype benchmark datasets are crucial to detect and mitigate social ...

06/14/2021 · Mitigating Biases in Toxic Language Detection through Invariant Rationalization
Automatic detection of toxic language plays an essential role in protect...

06/12/2023 · When Do Annotator Demographics Matter? Measuring the Influence of Annotator Demographics with the POPQUORN Dataset
Annotators are not fungible. Their demographics, life experiences, and b...
