FineDeb: A Debiasing Framework for Language Models

02/05/2023
by   Akash Saravanan, et al.

As language models are increasingly included in human-facing machine learning tools, bias against demographic subgroups has gained attention. We propose FineDeb, a two-phase debiasing framework for language models: the first phase performs contextual debiasing of the embeddings learned by a pretrained language model, and the second phase fine-tunes the model on a language modeling objective. Our results show that FineDeb offers stronger debiasing than other methods, which often leave the model as biased as the original language model. Our framework generalizes to demographics with multiple classes, and we demonstrate its effectiveness through extensive experiments and comparisons with state-of-the-art techniques. We release our code and data on GitHub.
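The abstract only sketches the two phases at a high level. The snippet below is a minimal illustrative sketch of a debias-then-finetune pipeline in that spirit; the choice of a BERT-style masked language model, the counterfactual sentence pairs, and the MSE loss over mean-pooled contextual embeddings are all assumptions for illustration, not details taken from the paper.

```python
# Hedged sketch of a two-phase "debias embeddings, then fine-tune on LM" loop.
# Phase 1 loss, example data, and base model are illustrative assumptions.
import torch
from transformers import AutoModelForMaskedLM, AutoTokenizer

model_name = "bert-base-uncased"  # assumed base model, not specified in the abstract
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForMaskedLM.from_pretrained(model_name)
optimizer = torch.optim.AdamW(model.parameters(), lr=2e-5)

# Phase 1 (assumption): pull contextual embeddings of counterfactual sentence
# pairs (demographic terms swapped) closer together with an MSE penalty.
pairs = [("The doctor said he was late.", "The doctor said she was late.")]
for sent_a, sent_b in pairs:
    enc_a = tokenizer(sent_a, return_tensors="pt")
    enc_b = tokenizer(sent_b, return_tensors="pt")
    hid_a = model(**enc_a, output_hidden_states=True).hidden_states[-1].mean(dim=1)
    hid_b = model(**enc_b, output_hidden_states=True).hidden_states[-1].mean(dim=1)
    debias_loss = torch.nn.functional.mse_loss(hid_a, hid_b)
    debias_loss.backward()
    optimizer.step()
    optimizer.zero_grad()

# Phase 2: fine-tune on a language modeling objective to retain fluency.
# A real run would mask a fraction of tokens; here labels = inputs for brevity.
texts = ["Language models are trained on large text corpora."]
for text in texts:
    enc = tokenizer(text, return_tensors="pt")
    outputs = model(**enc, labels=enc["input_ids"].clone())
    outputs.loss.backward()
    optimizer.step()
    optimizer.zero_grad()
```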

