Towards Debiasing NLU Models from Unknown Biases

09/25/2020
by Prasetya Ajie Utama, et al.

NLU models often exploit biases to achieve high dataset-specific performance without properly learning the intended task. Recently proposed debiasing methods have been shown to be effective in mitigating this tendency. However, these methods rely on the major assumption that the types of bias are known a priori, which limits their application to many NLU tasks and datasets. In this work, we present the first step to bridge this gap by introducing a self-debiasing framework that prevents models from relying mainly on biases without knowing them in advance. The proposed framework is general and complementary to the existing debiasing methods. We show that it allows these existing methods to retain their improvements on challenge datasets (i.e., sets of examples designed to expose models' reliance on biases) without specifically targeting certain biases. Furthermore, the evaluation suggests that applying the framework results in improved overall robustness. We include the code in the supplementary material.
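The abstract does not spell out the training objective, but the general recipe behind this family of debiasing methods can be illustrated with a short sketch: a weak, shallow model stands in for an explicit bias model (so no bias type has to be named in advance), and its confidence on each training example is used to down-weight the main model's loss on examples it already solves. The PyTorch sketch below is illustrative only, not the authors' released code; the names `MainModel`, `WeakModel`, and `example_reweighting_loss` are hypothetical.

```python
# A minimal sketch (assumed setup, not the authors' implementation) of
# self-debiasing via example reweighting: a shallow model plays the role
# of the bias model, and its confidence on the gold label is used to
# down-weight examples that are likely solvable via shortcuts.

import torch
import torch.nn as nn
import torch.nn.functional as F


class MainModel(nn.Module):
    """Stand-in for a full NLU classifier (e.g., a BERT-based model)."""
    def __init__(self, dim=16, num_labels=3):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(dim, 64), nn.ReLU(), nn.Linear(64, num_labels)
        )

    def forward(self, x):
        return self.net(x)


class WeakModel(nn.Module):
    """Stand-in for the shallow model that acts as the bias model.
    In a self-debiasing setting it is obtained without naming any bias,
    e.g., by using a lower-capacity model or training on a small subset."""
    def __init__(self, dim=16, num_labels=3):
        super().__init__()
        self.net = nn.Linear(dim, num_labels)

    def forward(self, x):
        return self.net(x)


def example_reweighting_loss(main_logits, weak_logits, labels):
    """Down-weight the main model's loss on examples the weak model
    classifies correctly with high confidence."""
    with torch.no_grad():
        weak_probs = F.softmax(weak_logits, dim=-1)
        # Probability the weak model assigns to the gold label.
        p_gold = weak_probs.gather(1, labels.unsqueeze(1)).squeeze(1)
        weights = 1.0 - p_gold  # confident weak model -> small weight
    per_example = F.cross_entropy(main_logits, labels, reduction="none")
    return (weights * per_example).mean()


if __name__ == "__main__":
    torch.manual_seed(0)
    x = torch.randn(32, 16)            # toy features
    y = torch.randint(0, 3, (32,))     # toy labels

    weak, main = WeakModel(), MainModel()
    # Assume `weak` has already been trained (e.g., on a small data subset).
    loss = example_reweighting_loss(main(x), weak(x), y)
    loss.backward()
    print(f"debiased training loss: {loss.item():.4f}")
```

Other combination strategies used in prior debiasing work, such as product-of-experts or confidence regularization, can be slotted in at the same point where the reweighting term is applied.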

Related research

05/30/2023 · Fighting Bias with Bias: Promoting Model Robustness by Amplifying Dataset Biases
NLP models often rely on superficial cues known as dataset biases to ach...

05/10/2020 · Towards Robustifying NLI Models Against Lexical Dataset Biases
While deep learning models are making fast progress on the task of Natur...

10/07/2020 · Improving QA Generalization by Concurrent Modeling of Multiple Biases
Existing NLP datasets contain various biases that models can easily expl...

10/23/2020 · Improving Robustness by Augmenting Training Sentences with Predicate-Argument Structures
Existing NLP datasets contain various biases, and models tend to quickly...

05/28/2023 · Mitigating Label Biases for In-context Learning
Various design settings for in-context learning (ICL), such as the choic...

05/01/2020 · Mind the Trade-off: Debiasing NLU Models without Degrading the In-distribution Performance
Models for natural language understanding (NLU) tasks often rely on the ...

08/19/2023 · Partition-and-Debias: Agnostic Biases Mitigation via A Mixture of Biases-Specific Experts
Bias mitigation in image classification has been widely researched, and ...
