Gender-tuning: Empowering Fine-tuning for Debiasing Pre-trained Language Models

07/20/2023
by   Somayeh Ghanbarzadeh, et al.

Recent studies have revealed that widely used Pre-trained Language Models (PLMs) propagate societal biases absorbed from their large, unmoderated pre-training corpora. Existing solutions require separate debiasing training processes and datasets, which are resource-intensive and costly; moreover, these methods degrade the PLMs' performance on downstream tasks. In this study, we propose Gender-tuning, which debiases PLMs through fine-tuning on downstream tasks' datasets. To this end, Gender-tuning integrates the Masked Language Modeling (MLM) training objective into the fine-tuning process. Comprehensive experiments show that Gender-tuning outperforms state-of-the-art baselines in average gender-bias scores while also improving PLMs' performance on downstream tasks, using only the downstream task's dataset. Gender-tuning is also a deployable debiasing tool for any PLM that works with the original fine-tuning procedure.
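The abstract describes integrating the MLM training objective into the fine-tuning loss. A minimal pure-Python sketch of what such a joint objective could look like; the function names, the per-example loss shape, and the `alpha` weighting scheme here are illustrative assumptions, not the paper's actual formulation:

```python
import math

def cross_entropy(logits, target):
    # Numerically stable log-softmax cross-entropy for a single example:
    # -log softmax(logits)[target]
    m = max(logits)
    log_sum = m + math.log(sum(math.exp(l - m) for l in logits))
    return log_sum - logits[target]

def gender_tuning_loss(mlm_logits, mlm_targets, cls_logits, cls_target, alpha=0.5):
    # Hypothetical joint objective: a weighted sum of the MLM loss (averaged
    # over masked token positions) and the downstream classification loss.
    # alpha balances the debiasing (MLM) term against the task term.
    mlm_loss = sum(cross_entropy(l, t)
                   for l, t in zip(mlm_logits, mlm_targets)) / len(mlm_targets)
    cls_loss = cross_entropy(cls_logits, cls_target)
    return alpha * mlm_loss + (1.0 - alpha) * cls_loss
```

In practice both terms would come from a shared PLM encoder (an MLM head over masked positions and a task head over the pooled representation), so a single backward pass updates the encoder with respect to both objectives at once.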


