Examining and Combating Spurious Features under Distribution Shift

06/14/2021
by Chunting Zhou, et al.

A central goal of machine learning is to learn robust representations that capture the causal relationship between input features and output labels. However, minimizing empirical risk over finite or biased datasets often results in models latching on to spurious correlations between the training input/output pairs that are not fundamental to the problem at hand. In this paper, we define and analyze robust and spurious representations using the information-theoretic concept of minimal sufficient statistics. We prove that even when only the input distribution is biased (i.e., under covariate shift), models can still pick up spurious features from their training data. Group distributionally robust optimization (DRO) provides an effective tool to alleviate covariate shift by minimizing the worst-case training loss over a set of pre-defined groups. Inspired by our analysis, we demonstrate that group DRO can fail when the groups do not directly account for the various spurious correlations that occur in the data. To address this, we further propose to minimize the worst-case loss over a more flexible set of distributions defined on the joint distribution of groups and instances, instead of treating each group as a whole at optimization time. Through extensive experiments on one image and two language tasks, we show that our model is significantly more robust than comparable baselines under various partitions. Our code is available at https://github.com/violet-zct/group-conditional-DRO.
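The group DRO objective mentioned above replaces the usual ERM average with the worst-case loss over pre-defined groups. The following is a minimal NumPy sketch of that objective (an illustration under our own naming, not the paper's implementation or the proposed group-conditional variant):

```python
import numpy as np

def group_dro_loss(losses, groups):
    """Worst-case group loss: average the per-instance losses within each
    pre-defined group, then take the maximum over groups."""
    group_ids = np.unique(groups)
    per_group = np.array([losses[groups == g].mean() for g in group_ids])
    return per_group.max()

# Toy example: two groups of two instances each. Group 1 has the higher
# average loss, so it alone determines the group DRO objective.
losses = np.array([0.1, 0.2, 0.9, 1.1])
groups = np.array([0, 0, 1, 1])
print(group_dro_loss(losses, groups))  # worst-case group loss: 1.0
print(losses.mean())                   # ERM objective is ~0.575 by contrast
```

Because the maximum is taken over whole groups, spurious correlations that cut across the chosen partition leave every group looking equally easy, which is the failure mode the abstract's more flexible, instance-aware distribution set is designed to address.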

Related research:

AGRO: Adversarial Discovery of Error-prone groups for Robust Optimization (12/02/2022)
Models trained via empirical risk minimization (ERM) are known to rely o...

Distributionally Robust Neural Networks for Group Shifts: On the Importance of Regularization for Worst-Case Generalization (11/20/2019)
Overparameterized neural networks can be highly accurate on average on a...

Predict then Interpolate: A Simple Algorithm to Learn Stable Classifiers (05/26/2021)
We propose Predict then Interpolate (PI), a simple algorithm for learnin...

Learning from a Biased Sample (09/05/2022)
The empirical risk minimization approach to data-driven decision making ...

Hierarchically Robust Representation Learning (11/11/2019)
With the tremendous success of deep learning in visual tasks, the repres...

Modeling the Q-Diversity in a Min-max Play Game for Robust Optimization (05/20/2023)
Models trained with empirical risk minimization (ERM) are revealed to ea...

Robustness to Spurious Correlations via Human Annotations (07/13/2020)
The reliability of machine learning systems critically assumes that the ...
