Parameter Efficient Diff Pruning for Bias Mitigation

05/30/2022
by Lukas Hauzenberger, et al.

In recent years, language models have achieved state-of-the-art performance on a wide variety of natural language processing tasks. As these models continue to grow in size, it becomes increasingly important to explore methods for making them more storage-efficient. At the same time, their increasing capabilities raise the danger that societal biases present in datasets become implicitly encoded in the model weights. We propose an architecture that addresses both challenges at once, using two techniques: DiffPruning and Adversarial Training. The result is a modular architecture that extends the original DiffPruning setup with an additional sparse subnetwork, applied as a mask, to diminish the effects of a predefined protected attribute at inference time.
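To make the setup concrete, here is a minimal PyTorch-style sketch of the two ingredients named above: a diff-pruned layer that adds a masked, trainable difference vector to frozen pretrained weights, and a gradient-reversal function used for adversarial training against a protected attribute. This is an illustrative sketch of the general techniques, not the authors' implementation; the names (DiffPrunedLinear, GradReverse, task_head, adv_head, l0_penalty) are hypothetical, and the plain sigmoid gate stands in for the hard-concrete L0 relaxation that diff pruning actually uses.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class GradReverse(torch.autograd.Function):
    """Gradient reversal: identity on the forward pass, negated (scaled)
    gradient on the backward pass, a standard tool in adversarial debiasing."""

    @staticmethod
    def forward(ctx, x, lambd):
        ctx.lambd = lambd
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.lambd * grad_output, None


class DiffPrunedLinear(nn.Module):
    """Frozen pretrained linear layer plus a trainable sparse difference
    vector gated by a (relaxed) binary mask: w = w0 + z * delta.
    Diff pruning learns z via a hard-concrete L0 relaxation; the sigmoid
    gate below is a simplification for readability."""

    def __init__(self, pretrained: nn.Linear):
        super().__init__()
        self.register_buffer("w0", pretrained.weight.detach().clone())
        self.register_buffer("b0", pretrained.bias.detach().clone())
        self.delta = nn.Parameter(torch.zeros_like(self.w0))  # difference vector
        self.gate = nn.Parameter(torch.zeros_like(self.w0))   # mask logits

    def forward(self, x):
        z = torch.sigmoid(self.gate)  # relaxed binary mask
        return F.linear(x, self.w0 + z * self.delta, self.b0)


# Hypothetical training step: the task head learns the downstream task while
# the adversarial head, fed through gradient reversal, pushes the masked diff
# weights to remove information about the protected attribute.
# h = encoder(x)                      # encoder built from DiffPrunedLinear layers
# task_loss = F.cross_entropy(task_head(h), y)
# adv_loss = F.cross_entropy(adv_head(GradReverse.apply(h, 1.0)), protected)
# (task_loss + adv_loss + l0_penalty).backward()
```

Because the sparsity penalty drives most mask entries to zero, only a small sparse diff per protected attribute needs to be stored alongside the shared base model, which is what makes such a setup both modular and storage-efficient.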

Related research

05/18/2023 · PDP: Parameter-free Differentiable Pruning is All You Need
DNN pruning is a popular way to reduce the size of a model, improve the ...

10/10/2019 · Structured Pruning of Large Language Models
Large language models have recently achieved state of the art performanc...

08/25/2022 · Shortcut Learning of Large Language Models in Natural Language Understanding: A Survey
Large language models (LLMs) have achieved state-of-the-art performance ...

10/28/2022 · Debiasing Masks: A New Framework for Shortcut Mitigation in NLU
Debiasing language models from unwanted behaviors in Natural Language Un...

02/13/2023 · Parameter-efficient Modularised Bias Mitigation via AdapterFusion
Large pre-trained language models contain societal biases and carry alon...

05/09/2023 · ChatGPT as a Text Simplification Tool to Remove Bias
The presence of specific linguistic signals particular to a certain sub-...