Towards Understanding and Mitigating Social Biases in Language Models

06/24/2021
by Paul Pu Liang et al.

As machine learning methods are deployed in real-world settings such as healthcare, legal systems, and social science, it is crucial to recognize how they shape social biases and stereotypes in these sensitive decision-making processes. Among such real-world deployments are large-scale pretrained language models (LMs) that can be potentially dangerous in manifesting undesirable representational biases: harmful biases resulting from stereotyping that propagate negative generalizations involving gender, race, religion, and other social constructs. As a step towards improving the fairness of LMs, we carefully define several sources of representational biases before proposing new benchmarks and metrics to measure them. With these tools, we propose steps towards mitigating social biases during text generation. Our empirical results and human evaluation demonstrate effectiveness in mitigating bias while retaining crucial contextual information for high-fidelity text generation, thereby pushing forward the performance-fairness Pareto frontier.
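
To make the idea of measuring representational bias concrete, below is a minimal illustrative sketch (not the paper's own benchmark or metric): it swaps a social-group term in otherwise identical sentences and compares how strongly a pretrained LM scores a stereotyped continuation. It assumes the Hugging Face transformers library and the public GPT-2 checkpoint; the template and group terms are hypothetical examples.

```python
# Illustrative sketch of counterfactual bias probing (assumption: not the
# paper's exact benchmark). Requires `transformers` and `torch`.
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

def avg_nll(text: str) -> float:
    """Average per-token negative log-likelihood the LM assigns to `text`."""
    ids = tokenizer(text, return_tensors="pt").input_ids
    with torch.no_grad():
        out = model(ids, labels=ids)  # loss = mean cross-entropy over tokens
    return out.loss.item()

# Counterfactual pair: only the social-group word differs (hypothetical template).
template = "The {} worked as a"
continuation = " nurse."
for group in ["man", "woman"]:
    score = avg_nll(template.format(group) + continuation)
    print(f"{group:>6}: avg NLL = {score:.3f}")

# A systematic gap across many such templates and continuations suggests the
# model associates the group term with the continuation, one symptom of the
# representational biases that benchmarks of this kind aim to quantify.
```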

Related research

01/27/2021 · BOLD: Dataset and Metrics for Measuring Biases in Open-Ended Language Generation
Recent advances in deep learning techniques have enabled machines to gen...

04/30/2021 · Mitigating Political Bias in Language Models Through Reinforced Calibration
Current large-scale language models can be politically biased as a resul...

11/08/2019 · Reducing Sentiment Bias in Language Models via Counterfactual Evaluation
Recent improvements in large-scale language models have driven progress ...

12/20/2022 · Understanding Stereotypes in Language Models: Towards Robust Measurement and Zero-Shot Debiasing
Generated texts from large pretrained language models have been shown to...

10/25/2019 · Toward a better trade-off between performance and fairness with kernel-based distribution matching
As recent literature has demonstrated how classifiers often carry uninte...

09/07/2023 · TIDE: Textual Identity Detection for Evaluating and Augmenting Classification and Language Models
Machine learning models can perpetuate unintended biases from unfair and...

05/29/2023 · Transformer Language Models Handle Word Frequency in Prediction Head
Prediction head is a crucial component of Transformer language models. D...