Controlling Learned Effects to Reduce Spurious Correlations in Text Classifiers

05/26/2023
by Parikshit Bansal, et al.

To address the problem of NLP classifiers learning spurious correlations between training features and target labels, a common approach is to make the model's predictions invariant to these features. However, this can be counterproductive when the features have a non-zero causal effect on the target label and are thus important for prediction. Therefore, using methods from the causal inference literature, we propose an algorithm that regularizes the learned effect of the features on the model's prediction toward the estimated effect of the feature on the label. This yields an automated augmentation method that leverages the estimated effect of a feature to appropriately change the labels of newly augmented inputs. On toxicity and IMDB review datasets, the proposed algorithm minimizes spurious correlations and improves accuracy on the minority group (i.e., samples that break the spurious correlations), while also improving total accuracy compared to standard training.
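The regularization idea lends itself to a short sketch. Below is a minimal, hypothetical PyTorch illustration of effect regularization: the model's learned effect of a binary feature (the change in predicted probability when the feature is toggled in a counterfactual copy of the input) is penalized toward a pre-estimated effect of that feature on the label. The names here (effect_regularized_loss, counterfactual_ids, estimated_effect, lambda_reg) are illustrative assumptions, not the authors' implementation.

    # Minimal sketch (assumed names, not the paper's code): tie the model's
    # learned feature effect to a pre-estimated effect on the label.
    import torch
    import torch.nn.functional as F

    def effect_regularized_loss(model, input_ids, counterfactual_ids, labels,
                                estimated_effect, lambda_reg=1.0):
        """Standard cross-entropy plus a penalty that pulls the model's
        learned effect of a feature toward its estimated effect on the label.

        counterfactual_ids: the same batch with the feature of interest
        toggled (e.g., the suspected spurious token removed).
        estimated_effect: scalar or per-example tensor estimated beforehand
        from the data.
        """
        logits = model(input_ids)                    # [batch, num_classes]
        ce = F.cross_entropy(logits, labels)

        # Learned effect: change in P(y = 1) when the feature is toggled.
        p_orig = torch.softmax(logits, dim=-1)[:, 1]
        p_cf = torch.softmax(model(counterfactual_ids), dim=-1)[:, 1]
        learned_effect = p_orig - p_cf

        # Regularize the learned effect toward the estimated effect; the
        # model is not forced to be fully invariant to the feature.
        reg = ((learned_effect - estimated_effect) ** 2).mean()
        return ce + lambda_reg * reg

Viewed as augmentation, the same idea amounts to training on counterfactual copies of the inputs whose soft labels are shifted by the estimated effect, rather than left unchanged, so the model is never asked to ignore a feature that genuinely moves the label.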


