
FineDeb: A Debiasing Framework for Language Models

by Akash Saravanan et al.

As language models are increasingly included in human-facing machine learning tools, bias against demographic subgroups has gained attention. We propose FineDeb, a two-phase debiasing framework for language models that begins with contextual debiasing of the embeddings learned by a pretrained language model. The model is then fine-tuned on a language modeling objective. Our results show that FineDeb offers stronger debiasing than other methods, which often yield models as biased as the original language model. Our framework generalizes to demographics with multiple classes, and we demonstrate its effectiveness through extensive experiments and comparisons with state-of-the-art techniques. We release our code and data on GitHub.
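To make the embedding-debiasing phase concrete, here is a minimal sketch. The abstract does not specify FineDeb's contextual debiasing procedure, so this example stands in with the classic projection-based idea (removing the component of each word vector along a demographic "bias direction"); all vectors, values, and function names below are hypothetical and for illustration only.

```python
# Illustrative sketch of embedding debiasing by projection.
# NOTE: this is NOT FineDeb's actual algorithm, which is not detailed
# in the abstract; it only shows the general debiasing idea on toy data.

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def scale(u, s):
    return [a * s for a in u]

def sub(u, v):
    return [a - b for a, b in zip(u, v)]

def normalize(u):
    n = dot(u, u) ** 0.5
    return [a / n for a in u]

def debias(embedding, bias_direction):
    """Remove the component of `embedding` along `bias_direction`."""
    d = normalize(bias_direction)
    return sub(embedding, scale(d, dot(embedding, d)))

# Toy 3-d embeddings (made-up values).
he, she = [1.0, 0.2, 0.0], [-1.0, 0.2, 0.0]
direction = sub(he, she)        # axis capturing the demographic contrast
word = [0.5, 0.7, 0.3]
debiased = debias(word, direction)
print(debiased)
```

After this step, `debiased` has no component along the `he`-`she` axis; a framework like FineDeb would then fine-tune the model with such adjusted embeddings on a language modeling objective to recover fluency. Extending to demographics with more than two classes typically means projecting out a multi-dimensional bias subspace rather than a single direction.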



