Freeze then Train: Towards Provable Representation Learning under Spurious Correlations and Feature Noise

10/20/2022
by Haotian Ye, et al.

The existence of spurious correlations, such as image backgrounds in the training environment, can make empirical risk minimization (ERM) perform poorly in the test environment. To address this problem, Kirichenko et al. (2022) empirically found that the core features that are causally related to the outcome can still be learned well even in the presence of spurious correlations. This suggests a promising strategy: first train a feature learner rather than a classifier, and then perform linear probing (last-layer retraining) in the test environment. However, a theoretical understanding of when and why this approach works is lacking. In this paper, we find that core features are learned well only when they are less noisy than spurious features, which is not necessarily true in practice. We provide both theory and experiments to support this finding and to illustrate the importance of feature noise. Moreover, we propose an algorithm called Freeze then Train (FTT), which first freezes certain salient features and then trains the remaining features using ERM. We theoretically show that FTT preserves features that are more beneficial to test-time probing. Across two commonly used real-world benchmarks, FTT outperforms ERM, JTT and CVaR-DRO, with especially substantial improvement in accuracy (by 4.8%) when feature noise is large.
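As a concrete illustration of the two-stage recipe described above, below is a minimal PyTorch sketch (not the authors' implementation; the toy data, module names, and the choice of which features to freeze are illustrative assumptions): part of the representation is frozen, the remaining features are trained with standard ERM, and at test time only a fresh linear head is refit on the fixed representation (linear probing).

# Minimal sketch of the Freeze-then-Train control flow (assumed interface,
# not the authors' code). Data and modules below are toy placeholders.
import torch
import torch.nn as nn

torch.manual_seed(0)

# Toy data: 2-D inputs, binary labels (stand-ins for train/test environments).
X_train, y_train = torch.randn(256, 2), torch.randint(0, 2, (256,))
X_test,  y_test  = torch.randn(128, 2), torch.randint(0, 2, (128,))

# Two feature extractors: `frozen` stands for the salient features FTT keeps
# fixed; `trainable` holds the remaining features, trained with ERM.
frozen    = nn.Sequential(nn.Linear(2, 8), nn.ReLU())
trainable = nn.Sequential(nn.Linear(2, 8), nn.ReLU())
for p in frozen.parameters():          # Step 1: freeze the salient features.
    p.requires_grad_(False)

head = nn.Linear(16, 2)                # training-time classifier head
opt = torch.optim.Adam(list(trainable.parameters()) + list(head.parameters()),
                       lr=1e-2)
loss_fn = nn.CrossEntropyLoss()

def features(x):
    # Full representation: frozen and trainable features concatenated.
    return torch.cat([frozen(x), trainable(x)], dim=1)

# Step 2: ERM updates only the trainable features and the head.
for _ in range(200):
    opt.zero_grad()
    loss_fn(head(features(X_train)), y_train).backward()
    opt.step()

# Step 3: linear probing in the test environment -- refit only a new
# linear head on the now-fixed representation.
probe = nn.Linear(16, 2)
popt = torch.optim.Adam(probe.parameters(), lr=1e-2)
with torch.no_grad():
    Z_test = features(X_test)
for _ in range(200):
    popt.zero_grad()
    loss_fn(probe(Z_test), y_test).backward()
    popt.step()

In practice the frozen block would come from a pre-trained salient-feature extractor rather than a random initialization; the sketch only fixes the order of operations: freeze, ERM-train, then probe.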

Related research

10/20/2022 · On Feature Learning in the Presence of Spurious Correlations
Deep classifiers are known to rely on spurious features – patterns w...

01/30/2022 · Provable Domain Generalization via Invariant-Feature Subspace Recovery
Domain generalization asks for models trained on a set of training envir...

03/28/2022 · Core Risk Minimization using Salient ImageNet
Deep neural networks can be unreliable in the real world especially when...

05/26/2023 · Controlling Learned Effects to Reduce Spurious Correlations in Text Classifiers
To address the problem of NLP classifiers learning spurious correlations...

05/25/2023 · Which Features are Learnt by Contrastive Learning? On the Role of Simplicity Bias in Class Collapse and Feature Suppression
Contrastive learning (CL) has emerged as a powerful technique for repres...

09/15/2022 · Explicit Tradeoffs between Adversarial and Natural Distributional Robustness
Several existing works study either adversarial or natural distributiona...

06/08/2023 · Robust Learning with Progressive Data Expansion Against Spurious Correlation
While deep learning models have shown remarkable performance in various ...
