Simple but effective techniques to reduce biases

09/13/2019
by Rabeeh Karimi Mahabadi, et al.

Several recent studies have shown that strong natural language inference (NLI) models are prone to relying on unwanted dataset biases, resulting in models that fail to capture the underlying generalization and are likely to perform poorly in real-world scenarios. Biases are statistical cues or superficial heuristics correlated with certain labels that are effective for the majority of examples but fail on more challenging, hard examples. In this work, we propose several learning strategies to train neural models that are more robust to such biases and transfer better to out-of-domain datasets. We first introduce an additive lightweight model that learns the dataset biases. We then use its predictions to adjust the loss of the base model and reduce its reliance on those biases. In other words, our methods down-weight the importance of biased examples and focus training on hard examples that require grounded reasoning to deduce the label. Our approaches are model-agnostic and simple to implement. We experiment on large-scale natural language inference and fact-verification datasets and show that our debiased models obtain significant gains over the baselines on several challenging out-of-domain datasets.
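The abstract describes the core recipe: train a lightweight bias-only model and use its predictions to adjust the base model's loss. The sketch below is a minimal PyTorch illustration of two common ways such an adjustment can be done (a product-of-experts combination and an example-reweighting variant); it is not necessarily the paper's exact formulation, and all function and variable names are illustrative.

```python
import torch
import torch.nn.functional as F

def product_of_experts_loss(main_logits, bias_logits, labels):
    # Combine main-model and (frozen) bias-only log-probabilities, then apply
    # cross-entropy: examples the bias model already classifies confidently
    # contribute smaller gradients to the main model.
    combined = F.log_softmax(main_logits, dim=-1) + F.log_softmax(bias_logits, dim=-1)
    return F.cross_entropy(combined, labels)

def reweighted_loss(main_logits, bias_logits, labels):
    # Down-weight each example by the bias model's confidence in the gold
    # label, so training focuses on the "hard" examples.
    bias_probs = F.softmax(bias_logits, dim=-1)
    gold_conf = bias_probs.gather(1, labels.unsqueeze(1)).squeeze(1)
    per_example = F.cross_entropy(main_logits, labels, reduction="none")
    return ((1.0 - gold_conf) * per_example).mean()

# Toy usage with a batch of 4 examples and 3 NLI labels.
main_logits = torch.randn(4, 3, requires_grad=True)  # from the base model
bias_logits = torch.randn(4, 3).detach()              # from the bias-only model
labels = torch.tensor([0, 2, 1, 1])
loss = product_of_experts_loss(main_logits, bias_logits, labels)
loss.backward()
```

In both variants the bias-only model is kept out of the gradient path, so only the base model is pushed to rely on evidence beyond the biased cues.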


Related research

07/09/2019
Don't Take the Premise for Granted: Mitigating Artifacts in Natural Language Inference
Natural Language Inference (NLI) datasets often contain hypothesis-only ...

04/16/2021
Natural Language Inference with a Human Touch: Using Human Explanations to Guide Model Attention
Natural Language Inference (NLI) models are known to learn from biases a...

02/28/2023
SMoA: Sparse Mixture of Adapters to Mitigate Multiple Dataset Biases
Recent studies reveal that various biases exist in different NLP tasks, ...

10/05/2022
GAPX: Generalized Autoregressive Paraphrase-Identification X
Paraphrase Identification is a fundamental task in Natural Language Proc...

05/01/2020
Mind the Trade-off: Debiasing NLU Models without Degrading the In-distribution Performance
Models for natural language understanding (NLU) tasks often rely on the ...

02/09/2021
Statistically Profiling Biases in Natural Language Reasoning Datasets and Models
Recent work has indicated that many natural language understanding and r...

10/08/2020
An Empirical Study on Model-agnostic Debiasing Strategies for Robust Natural Language Inference
The prior work on natural language inference (NLI) debiasing mainly targ...
