On the Independence of Association Bias and Empirical Fairness in Language Models

04/20/2023
by Laura Cabello, et al.

The societal impact of pre-trained language models has prompted researchers to probe them for strong associations between protected attributes and value-loaded terms, from slurs to prestigious job titles. Such work is said to probe models for bias or fairness, or such probes 'into representational biases' are said to be 'motivated by fairness', suggesting an intimate connection between bias and fairness. We provide conceptual clarity by distinguishing between association biases (Caliskan et al., 2022) and empirical fairness (Shen et al., 2022), and we show that the two can be independent. Our main contribution, however, is showing why this should not come as a surprise. To this end, we first provide a thought experiment showing how association bias and empirical fairness can be completely orthogonal. Next, we provide empirical evidence that there is no correlation between bias metrics and fairness metrics across the most widely used language models. Finally, we survey the sociological and psychological literature and show that it provides ample support for expecting these metrics to be uncorrelated.
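
To make the comparison concrete, here is a minimal sketch, assuming a Caliskan-style WEAT effect size as the association-bias metric and a per-group performance gap as the empirical-fairness metric. This is not the authors' code; every embedding and per-model score in it is a made-up placeholder, used only to show how one would compute the two quantities and test them for correlation.

    # Minimal sketch (not the paper's code): toy illustration of the two
    # quantities the abstract compares. All embeddings and scores below
    # are made-up placeholders.
    import numpy as np
    from scipy.stats import spearmanr

    def weat_effect_size(X, Y, A, B):
        """Caliskan-style association effect size between target word sets
        X, Y and attribute word sets A, B (rows are word embeddings)."""
        def mean_cos(w, V):
            # Mean cosine similarity between vector w and each row of V.
            return float(np.mean((V @ w) /
                                 (np.linalg.norm(V, axis=1) * np.linalg.norm(w))))
        s_X = np.array([mean_cos(x, A) - mean_cos(x, B) for x in X])
        s_Y = np.array([mean_cos(y, A) - mean_cos(y, B) for y in Y])
        return (s_X.mean() - s_Y.mean()) / np.concatenate([s_X, s_Y]).std()

    rng = np.random.default_rng(0)
    # Random stand-in embeddings; in practice these would come from a model.
    X, Y, A, B = (rng.normal(size=(8, 50)) for _ in range(4))
    print(f"association bias (toy) = {weat_effect_size(X, Y, A, B):.2f}")

    # Hypothetical per-model scores: association bias vs. an empirical
    # fairness gap (e.g., accuracy difference between demographic groups).
    # Under the paper's finding, the rank correlation should be near zero.
    bias_scores = np.array([0.81, 0.35, 0.62, 0.48, 0.90, 0.27])
    fairness_gaps = np.array([0.04, 0.06, 0.02, 0.05, 0.03, 0.07])
    rho, p = spearmanr(bias_scores, fairness_gaps)
    print(f"Spearman rho = {rho:.2f} (p = {p:.2f})")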
