Class Is Invariant to Context and Vice Versa: On Learning Invariance for Out-Of-Distribution Generalization

08/06/2022
by Jiaxin Qi, et al.

Out-Of-Distribution generalization (OOD) is all about learning invariance against environmental changes. If the context in every class were evenly distributed, OOD would be trivial because the context could be easily removed due to an underlying principle: class is invariant to context. However, collecting such a balanced dataset is impractical. Learning on imbalanced data biases the model toward context and thus hurts OOD. Therefore, the key to OOD is context balance. We argue that the widely adopted assumption in prior work, that the context bias can be directly annotated or estimated from biased class prediction, renders the context incomplete or even incorrect. In contrast, we point out the ever-overlooked other side of the above principle: context is also invariant to class, which motivates us to consider the classes (which are already labeled) as the varying environments to resolve context bias (without context labels). We implement this idea by minimizing the contrastive loss of intra-class sample similarity while ensuring that this similarity is invariant across all classes. On benchmarks with various context biases and domain gaps, we show that a simple re-weighting-based classifier equipped with our context estimation achieves state-of-the-art performance. We provide theoretical justifications in the Appendix and code at https://github.com/simpleshinobu/IRMCon.
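As a rough illustration of the objective described in the abstract, the sketch below treats each class as an environment and applies an IRMv1-style gradient penalty to an intra-class contrastive loss, so that the loss stays invariant across classes. This is only a minimal PyTorch sketch of the stated idea, not the authors' released implementation (see the linked repository for that); the names `intra_class_contrastive_loss`, `irm_penalty`, `irmcon_objective`, and `context_encoder`, as well as the specific InfoNCE formulation, are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def intra_class_contrastive_loss(features, scale, temperature=0.1):
    """InfoNCE loss over samples of a single class.
    `features` holds two augmented views per sample, shape (2N, D);
    view i and view i+N are positives. `scale` is the IRM dummy multiplier."""
    z = F.normalize(features, dim=1)
    logits = scale * (z @ z.t()) / temperature
    n = z.size(0) // 2
    pos = torch.arange(2 * n, device=z.device).roll(n)  # index of the paired view
    self_mask = torch.eye(2 * n, dtype=torch.bool, device=z.device)
    logits = logits.masked_fill(self_mask, float('-inf'))  # drop self-similarity
    return F.cross_entropy(logits, pos)

def irm_penalty(loss, scale):
    """IRMv1-style penalty: squared gradient of the loss w.r.t. the dummy scale."""
    grad = torch.autograd.grad(loss, [scale], create_graph=True)[0]
    return (grad ** 2).sum()

def irmcon_objective(context_encoder, views_per_class, lam=1.0):
    """Classes act as environments: minimize the intra-class contrastive
    loss while penalizing its variation across classes."""
    device = next(context_encoder.parameters()).device
    scale = torch.ones(1, requires_grad=True, device=device)
    risks, penalties = [], []
    for views in views_per_class:  # one tensor of augmented views per class
        loss = intra_class_contrastive_loss(context_encoder(views.to(device)), scale)
        risks.append(loss)
        penalties.append(irm_penalty(loss, scale))
    return torch.stack(risks).mean() + lam * torch.stack(penalties).mean()
```

In a training loop one would sample two augmented views per image for each class, minimize `irmcon_objective` over the context encoder, and then, as the abstract suggests, use the resulting context estimates to re-weight a standard classifier; the re-weighting details are deliberately omitted here.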

Related research:
- Equivariance and Invariance Inductive Bias for Learning from Insufficient Data (07/25/2022)
- Invariant Feature Learning for Generalized Long-Tailed Classification (07/19/2022)
- Identifying Hard Noise in Long-Tailed Sample Distribution (07/27/2022)
- Domain Generalization using Causal Matching (06/12/2020)
- Generalized Parametric Contrastive Learning (09/26/2022)
- C-Mixup: Improving Generalization in Regression (10/11/2022)
- Linear unit-tests for invariance discovery (07/24/2021)
