
Parameter Efficient Diff Pruning for Bias Mitigation

by Lukas Hauzenberger et al.

In recent years, language models have achieved state-of-the-art performance on a wide variety of natural language processing tasks. As these models continue to grow in size, it becomes increasingly important to explore methods for making them more storage-efficient. At the same time, their increasing capabilities raise the danger that societal biases present in training datasets are implicitly encoded in the model weights. We propose an architecture that addresses both challenges at once by combining two techniques: DiffPruning and Adversarial Training. The result is a modular architecture that extends the original DiffPruning setup with an additional sparse subnetwork, applied as a mask to diminish the effect of a predefined protected attribute at inference time.
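The core mechanism described above (a sparse diff applied as an on-demand mask over frozen base weights) can be sketched as follows. This is a minimal illustration, not the authors' implementation: the layer shape, the hand-set binary gate `z`, and the helper `effective_weights` are all assumptions for illustration; in actual DiffPruning, `z` is learned via a relaxed L0 penalty (hard-concrete gates) and `delta` is trained, here adversarially against a protected-attribute classifier.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical frozen pretrained weights of one layer.
W_base = rng.normal(size=(4, 4))

# The sparse subnetwork: a dense diff `delta` gated by a binary mask `z`.
# In the paper the mask is learned under a sparsity penalty; here we
# hand-set it so only two entries survive, for illustration.
delta = rng.normal(size=(4, 4))
z = np.zeros((4, 4))
z[0, 1] = 1.0
z[2, 3] = 1.0

def effective_weights(apply_debias_mask: bool) -> np.ndarray:
    """Compose the frozen base weights with the sparse diff on demand.

    With the mask off, the original model is recovered exactly; with it
    on, only the gated entries change, which is what makes the module
    storage-efficient and toggleable at inference time.
    """
    if apply_debias_mask:
        return W_base + z * delta
    return W_base

# Only the two gated coordinates differ from the base model.
changed = np.argwhere(effective_weights(True) != W_base)
print(len(changed))
```

Because the base weights stay frozen, storing the debiasing module only requires the nonzero entries of `z * delta`, and the mask can be switched off to recover the unmodified model.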




Related articles:

- PDP: Parameter-free Differentiable Pruning is All You Need
- Structured Pruning of Large Language Models
- Shortcut Learning of Large Language Models in Natural Language Understanding: A Survey
- Debiasing Masks: A New Framework for Shortcut Mitigation in NLU
- Parameter-efficient Modularised Bias Mitigation via AdapterFusion
- ChatGPT as a Text Simplification Tool to Remove Bias