Measuring Fairness with Biased Rulers: A Survey on Quantifying Biases in Pretrained Language Models

12/14/2021
by Pieter Delobelle, et al.

An increasing awareness of biased patterns in natural language processing resources, like BERT, has motivated many metrics to quantify 'bias' and 'fairness'. But comparing the results of different metrics, and of the works that evaluate with them, remains difficult, if not outright impossible. We survey the existing literature on fairness metrics for pretrained language models and experimentally evaluate their compatibility, covering both biases in language models themselves and in their downstream tasks. We do this through a combination of a traditional literature survey, correlation analysis, and empirical evaluations. We find that many metrics are not compatible with one another and depend strongly on (i) the templates, (ii) the attribute and target seeds, and (iii) the choice of embeddings. These results indicate that fairness or bias evaluation for contextualized language models remains challenging, if not highly subjective. To improve future comparisons and fairness evaluations, we recommend avoiding embedding-based metrics and focusing on fairness evaluations in downstream tasks.
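To make the dependence on attribute and target seeds concrete, the sketch below shows a minimal WEAT-style association test (Caliskan et al., 2017) over word embeddings, one of the embedding-based metric families the survey covers. This is an illustrative assumption, not the survey's own code: the embed function is a deterministic stand-in for a real embedding lookup, and all seed word lists are hypothetical; swapping in different seeds can change the resulting score markedly.

import hashlib

import numpy as np


def embed(word, dim=50):
    # Stand-in for a real embedding lookup (e.g. a static or pooled
    # contextual embedding); here each word is hashed to a fixed
    # random vector so the example is self-contained and repeatable.
    seed = int(hashlib.md5(word.encode()).hexdigest(), 16) % (2**32)
    return np.random.default_rng(seed).normal(size=dim)


def cos(a, b):
    # Cosine similarity between two embedding vectors.
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))


def assoc(w, A, B):
    # s(w, A, B): mean similarity of w to attribute set A minus
    # its mean similarity to attribute set B.
    return (np.mean([cos(embed(w), embed(a)) for a in A])
            - np.mean([cos(embed(w), embed(b)) for b in B]))


def weat_effect_size(X, Y, A, B):
    # WEAT effect size: difference of the mean target-word
    # associations, normalized by the pooled standard deviation.
    sx = [assoc(x, A, B) for x in X]
    sy = [assoc(y, A, B) for y in Y]
    return (np.mean(sx) - np.mean(sy)) / np.std(sx + sy, ddof=1)


# Hypothetical target and attribute seeds, chosen only for
# illustration. The survey's point is that scores like this one
# shift when the seed lists (or the embeddings) are varied.
X = ["engineer", "scientist", "programmer"]
Y = ["nurse", "teacher", "librarian"]
A = ["he", "man", "his"]
B = ["she", "woman", "her"]

print(weat_effect_size(X, Y, A, B))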

Related research

08/20/2023
A Survey on Fairness in Large Language Models
Large language models (LLMs) have shown powerful performance and develop...

10/09/2022
Quantifying Social Biases Using Templates is Unreliable
Recently, there has been an increase in efforts to understand how large ...

04/20/2023
On the Independence of Association Bias and Empirical Fairness in Language Models
The societal impact of pre-trained language models has prompted research...

03/25/2022
On the Intrinsic and Extrinsic Fairness Evaluation Metrics for Contextualized Language Representations
Multiple metrics have been introduced to measure fairness in various nat...

05/15/2023
From Pretraining Data to Language Models to Downstream Tasks: Tracking the Trails of Political Biases Leading to Unfair NLP Models
Large language models (LMs) are pretrained on diverse data sources: news...

08/21/2023
Systematic Offensive Stereotyping (SOS) Bias in Language Models
Research has shown that language models (LMs) are socially biased. Howev...

06/08/2023
Mapping Brains with Language Models: A Survey
Over the years, many researchers have seemingly made the same observatio...
