Generating Data to Mitigate Spurious Correlations in Natural Language Inference Datasets

03/24/2022
by Yuxiang Wu, et al.

Natural language processing models often exploit spurious correlations between task-independent features and labels in their training datasets, performing well only on the distributions they are trained on and failing to generalise to different task distributions. We propose to tackle this problem by generating a debiased version of a dataset, which can then be used to train a debiased, off-the-shelf model simply by replacing its training data. Our approach consists of 1) a method for training data generators to produce high-quality, label-consistent data samples; and 2) a filtering mechanism that removes data points contributing to spurious correlations, measured in terms of z-statistics. We generate debiased versions of the SNLI and MNLI datasets and evaluate on a large suite of debiased, out-of-distribution, and adversarial test sets. Models trained on our debiased datasets generalise better than those trained on the original datasets in all settings. On a majority of the datasets, our method outperforms or performs comparably to previous state-of-the-art debiasing strategies, and when combined with an orthogonal technique, product-of-experts, it improves further and surpasses the previous best results on SNLI-hard and MNLI-hard.
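The two components above can be made concrete with short sketches; these are illustrative reconstructions, not the authors' released code. First, a minimal, hypothetical example of label-conditioned data generation with an off-the-shelf causal language model. The prompt format, base model, and decoding settings are assumptions for illustration; in practice the generator would first be fine-tuned on the original dataset so that its outputs are label-consistent:

```python
# Hypothetical sketch: prompting a causal LM to generate a hypothesis
# conditioned on a premise and a target NLI label. The prompt format
# and base model are illustrative assumptions, not the paper's setup.
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

def generate_hypothesis(premise: str, label: str, max_new_tokens: int = 40) -> str:
    prompt = f"premise: {premise} label: {label} hypothesis:"
    ids = tok(prompt, return_tensors="pt").input_ids
    out = model.generate(
        ids,
        do_sample=True,                 # sample for diverse candidates
        top_p=0.9,                      # nucleus sampling
        max_new_tokens=max_new_tokens,
        pad_token_id=tok.eos_token_id,
    )
    # Strip the prompt tokens and return only the generated continuation.
    return tok.decode(out[0][ids.shape[1]:], skip_special_tokens=True)

print(generate_hypothesis("A man is playing a guitar on stage.", "entailment"))
```

Second, a sketch of the z-statistic filter, assuming simple word-presence features and a one-proportion z-test, z = (p_hat - p0) / sqrt(p0 * (1 - p0) / n), where p_hat is the empirical probability of a label among examples containing a given word, p0 is the overall label prior, and n is the number of examples containing the word. The feature choice and threshold value here are assumptions for illustration:

```python
import math
from collections import Counter, defaultdict

def z_statistic(n_word_label: int, n_word: int, p0: float) -> float:
    """One-proportion z-test: how far the label rate among examples
    containing a word deviates from the overall label prior p0."""
    p_hat = n_word_label / n_word
    return (p_hat - p0) / math.sqrt(p0 * (1 - p0) / n_word)

def filter_spurious(examples, z_threshold=5.0):
    """Drop examples containing any word whose association with some
    label exceeds the z-threshold. `examples` is a list of
    (tokens, label) pairs; the threshold value is illustrative."""
    n = len(examples)
    label_counts = Counter(label for _, label in examples)
    word_counts = Counter()
    word_label_counts = defaultdict(Counter)
    for tokens, label in examples:
        for w in set(tokens):  # word presence, not frequency
            word_counts[w] += 1
            word_label_counts[w][label] += 1

    biased_words = set()
    for w, n_w in word_counts.items():
        for label, n_wl in word_label_counts[w].items():
            p0 = label_counts[label] / n  # overall label prior
            if abs(z_statistic(n_wl, n_w, p0)) > z_threshold:
                biased_words.add(w)
                break

    return [(tokens, label) for tokens, label in examples
            if not biased_words.intersection(tokens)]
```

In this reading of the abstract, the filter would be applied to the pool of generated samples, discarding those that would reintroduce feature-label shortcuts, before the debiased dataset replaces the original training data.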


Related research

04/20/2020  Learning What Makes a Difference from Counterfactual Examples and Gradient Supervision
06/03/2023  Stubborn Lexical Bias in Data and Models
11/13/2021  Extracting and filtering paraphrases by bridging natural language inference and paraphrasing
04/28/2020  Unnatural Language Processing: Bridging the Gap Between Synthetic and Natural Language Data
09/19/2021  Towards Zero-Label Language Learning
09/17/2023  Mitigating Shortcuts in Language Models with Soft Label Encoding
09/26/2019  Learning the Difference that Makes a Difference with Counterfactually-Augmented Data
