Causally-motivated Shortcut Removal Using Auxiliary Labels

05/13/2021
by Maggie Makar, et al.

Robustness to certain distribution shifts is a key requirement in many ML applications. Often, relevant distribution shifts can be formulated in terms of interventions on the process that generates the input data. Here, we consider the problem of learning a predictor whose risk across such shifts is invariant. A key challenge to learning such risk-invariant predictors is shortcut learning, or the tendency for models to rely on spurious correlations in practice, even when a predictor based on shift-invariant features could achieve optimal i.i.d. generalization in principle. We propose a flexible, causally-motivated approach to address this challenge. Specifically, we propose a regularization scheme that makes use of auxiliary labels for potential shortcut features, which are often available at training time. Drawing on the causal structure of the problem, we enforce a conditional independence between the representation used to predict the main label and the auxiliary labels. We show both theoretically and empirically that this causally-motivated regularization scheme yields robust predictors that generalize well both in-distribution and under distribution shifts, and does so with better sample efficiency than standard regularization or weighting approaches.
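As a rough illustration of the regularization idea described in the abstract (this is a minimal sketch, not the authors' released implementation), the snippet below penalizes dependence between a learned representation phi(X) and an auxiliary shortcut label V, conditional on the main label Y, using an RBF-kernel MMD penalty between auxiliary-label groups within each main-label stratum. The kernel bandwidth, penalty weight, and all function names are illustrative assumptions.

```python
# Hypothetical sketch of a conditional-independence penalty phi(X) ⊥ V | Y.
# Not the paper's code; names, kernel, and hyperparameters are assumptions.
import torch
import torch.nn as nn


def rbf_mmd2(a: torch.Tensor, b: torch.Tensor, sigma: float = 1.0) -> torch.Tensor:
    """Squared MMD between two samples under an RBF kernel."""
    def k(x, y):
        d2 = torch.cdist(x, y).pow(2)
        return torch.exp(-d2 / (2 * sigma ** 2))
    return k(a, a).mean() + k(b, b).mean() - 2 * k(a, b).mean()


def shortcut_penalty(phi: torch.Tensor, y: torch.Tensor, v: torch.Tensor) -> torch.Tensor:
    """Match representation distributions across auxiliary-label groups
    within each main-label stratum, encouraging phi(X) ⊥ V | Y."""
    penalty = phi.new_zeros(())
    for label in y.unique():
        stratum = y == label
        a, b = phi[stratum & (v == 0)], phi[stratum & (v == 1)]
        if len(a) > 1 and len(b) > 1:
            penalty = penalty + rbf_mmd2(a, b)
    return penalty


# Usage: total loss = prediction loss + alpha * conditional-independence penalty.
encoder = nn.Sequential(nn.Linear(20, 32), nn.ReLU())
head = nn.Linear(32, 2)
x = torch.randn(64, 20)
y = torch.randint(0, 2, (64,))   # main label
v = torch.randint(0, 2, (64,))   # auxiliary label for the potential shortcut
phi = encoder(x)
loss = nn.functional.cross_entropy(head(phi), y) + 10.0 * shortcut_penalty(phi, y, v)
loss.backward()
```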
