ADEPT: A DEbiasing PrompT Framework

11/10/2022
by Ke Yang, et al.

Several works have shown that finetuning is a viable approach for debiasing contextualized word embeddings. Similarly, discrete prompts with semantic meanings have been shown to be effective in debiasing tasks. Because their token-level representations are not fixed, continuous prompts usually surpass discrete ones at providing a pre-trained language model (PLM) with additional task-specific information. Despite this, relatively few efforts have been made to debias PLMs by prompt tuning with continuous prompts compared to their discrete counterparts. Furthermore, for most debiasing methods that alter a PLM's original parameters, a major challenge is to decrease the bias in the PLM while ensuring that the PLM does not lose its representation ability. Finetuning methods typically struggle to maintain this balance, as they tend to aggressively strip away the meanings of attribute words. In this paper, we propose ADEPT, a method that debiases PLMs using prompt tuning while maintaining the delicate balance between removing biases and preserving representation ability. To achieve this, we propose a new training criterion inspired by manifold learning and equip it with an explicit debiasing term to optimize prompt tuning. In addition, we conduct several experiments on the reliability, quality, and quantity of a previously proposed attribute training corpus in order to obtain a clearer prototype of a given attribute, which indicates the attribute's position and its relative distances to other words on the manifold. We evaluate ADEPT on several widely used debiasing benchmarks and downstream tasks, and find that it achieves competitive results while maintaining, and in some cases even improving, the PLM's representation ability. We further visualize the correlations among words before and after debiasing the PLM and offer possible explanations for the observed effects.
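To make the prompt-tuning setup described in the abstract more concrete, the following is a minimal PyTorch/transformers sketch, not the authors' released code. It prepends trainable continuous prompt embeddings to a frozen PLM and optimizes them with a combined objective. The class name ContinuousPromptEncoder, the prompt length, the bert-base-uncased backbone, and the two loss terms in debias_step are illustrative assumptions; ADEPT's actual manifold-learning-inspired criterion and explicit debiasing term are not reproduced here.

```python
# Hypothetical sketch: continuous prompt tuning against a frozen PLM, with
# placeholder debiasing and representation-preserving terms (not ADEPT's losses).
import torch
import torch.nn as nn
from transformers import AutoModel


class ContinuousPromptEncoder(nn.Module):
    def __init__(self, model_name="bert-base-uncased", prompt_len=20):
        super().__init__()
        self.plm = AutoModel.from_pretrained(model_name)
        for p in self.plm.parameters():
            p.requires_grad = False  # freeze the PLM; only the prompt is trained
        hidden = self.plm.config.hidden_size
        self.prompt = nn.Parameter(torch.randn(prompt_len, hidden) * 0.02)

    def forward(self, input_ids, attention_mask):
        # Prepend the trainable prompt embeddings to the token embeddings.
        tok_emb = self.plm.get_input_embeddings()(input_ids)
        batch = input_ids.size(0)
        prompt = self.prompt.unsqueeze(0).expand(batch, -1, -1)
        inputs_embeds = torch.cat([prompt, tok_emb], dim=1)
        prompt_mask = torch.ones(batch, self.prompt.size(0),
                                 device=input_ids.device)
        mask = torch.cat([prompt_mask, attention_mask], dim=1)
        out = self.plm(inputs_embeds=inputs_embeds, attention_mask=mask)
        # Return contextual embeddings of the original tokens only.
        return out.last_hidden_state[:, self.prompt.size(0):, :]


def debias_step(model, optimizer, batch, lambda_repr=1.0):
    """One optimization step with two placeholder terms: push the mean
    embeddings of two attribute groups together, and keep embeddings close
    to the frozen PLM's originals (both are stand-ins, not ADEPT's criterion)."""
    emb = model(batch["input_ids"], batch["attention_mask"])
    group_a = emb[batch["is_group_a"]].mean(dim=(0, 1))
    group_b = emb[~batch["is_group_a"]].mean(dim=(0, 1))
    debias_term = (group_a - group_b).pow(2).sum()
    repr_term = (emb - batch["original_embeddings"]).pow(2).mean()
    loss = debias_term + lambda_repr * repr_term
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

A typical run would build an optimizer over the prompt parameters alone, e.g. torch.optim.AdamW([model.prompt], lr=1e-3), and call debias_step per batch. Because only the prompt embeddings change, the PLM's original weights, and hence much of its representation ability, are left intact, which is the balance the abstract emphasizes.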

Related research

- DynaMaR: Dynamic Prompt with Mask Token Representation (06/07/2022)
- Using Adversarial Debiasing to Remove Bias from Word Embeddings (07/21/2021)
- Watermarking Pre-trained Language Models with Backdooring (10/14/2022)
- FairCLIP: Social Bias Elimination based on Attribute Prototype Learning and Representation Neutralization (10/26/2022)
- Marked Attribute Bias in Natural Language Inference (09/28/2021)
- CBOW Is Not All You Need: Combining CBOW with the Compositional Matrix Space Model (02/18/2019)
- Conceptor-Aided Debiasing of Contextualized Embeddings (11/20/2022)
