Debiasing Methods in Natural Language Understanding Make Bias More Accessible

09/09/2021
by Michael Mendelson, et al.

Model robustness to bias is often assessed by generalization to carefully designed out-of-distribution datasets. Recent debiasing methods in natural language understanding (NLU) improve performance on such datasets by pressuring models into making unbiased predictions. An underlying assumption behind such methods is that this also leads to the discovery of more robust features in the model's inner representations. We propose a general probing-based framework that allows for post-hoc interpretation of biases in language models, and use an information-theoretic approach to measure the extractability of certain biases from the model's representations. We experiment with several NLU datasets and known biases, and show that, counter-intuitively, the more a language model is pushed towards a debiased regime, the more bias is actually encoded in its inner representations.
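The probing idea in the abstract can be illustrated with a minimal sketch: freeze a model's sentence representations, train a small classifier (a probe) to predict a known bias feature from them, and treat how easily the probe succeeds as a proxy for how extractable the bias is. The code below uses synthetic vectors and a synthetic binary bias label in place of real encoder representations; all names and data here are illustrative assumptions, and the paper's actual measure is information-theoretic (based on extractability from representations), not raw probe accuracy.

```python
import numpy as np

def train_linear_probe(X, y, lr=0.1, epochs=200):
    """Train a logistic-regression probe on frozen representations X
    to predict a binary bias feature y via gradient descent."""
    rng = np.random.default_rng(0)
    w = rng.normal(scale=0.01, size=X.shape[1])
    b = 0.0
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-(X @ w + b)))  # sigmoid predictions
        w -= lr * (X.T @ (p - y)) / len(y)      # gradient step on weights
        b -= lr * np.mean(p - y)                # gradient step on bias term
    return w, b

def probe_accuracy(X, y, w, b):
    """Held-out accuracy of the probe; higher = bias more extractable."""
    return float(np.mean(((X @ w + b) > 0) == y))

# Synthetic stand-in for sentence representations: a bias feature
# (e.g. premise-hypothesis lexical overlap in NLI) linearly encoded
# in a few dimensions of a 64-d vector.
rng = np.random.default_rng(1)
n, d = 2000, 64
y = rng.integers(0, 2, size=n)
X = rng.normal(size=(n, d))
X[:, :4] += 1.5 * (y[:, None] - 0.5)  # inject an extractable bias signal

w, b = train_linear_probe(X[:1500], y[:1500])
acc = probe_accuracy(X[1500:], y[1500:], w, b)
print(f"held-out probe accuracy: {acc:.2f}")
```

Comparing such extractability scores before and after debiasing is what lets the paper ask whether debiased models truly discard biased features or merely stop acting on them.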


Related research

Shortcut Learning of Large Language Models in Natural Language Understanding: A Survey (08/25/2022)
Large language models (LLMs) have achieved state-of-the-art performance ...

End-to-End Self-Debiasing Framework for Robust NLU Training (09/05/2021)
Existing Natural Language Understanding (NLU) models have been shown to ...

Robust Natural Language Understanding with Residual Attention Debiasing (05/28/2023)
Natural language understanding (NLU) models often suffer from unintended...

Mitigating Biases in Toxic Language Detection through Invariant Rationalization (06/14/2021)
Automatic detection of toxic language plays an essential role in protect...

Statistically Profiling Biases in Natural Language Reasoning Datasets and Models (02/09/2021)
Recent work has indicated that many natural language understanding and r...

Debiasing Masks: A New Framework for Shortcut Mitigation in NLU (10/28/2022)
Debiasing language models from unwanted behaviors in Natural Language Un...

Modeling natural language emergence with integral transform theory and reinforcement learning (11/30/2018)
Zipf's law predicts a power-law relationship between word rank and frequ...
