Invariance Principle Meets Information Bottleneck for Out-of-Distribution Generalization

06/11/2021
by Kartik Ahuja, et al.

The invariance principle from causality is at the heart of notable approaches such as invariant risk minimization (IRM) that seek to address out-of-distribution (OOD) generalization failures. Despite the promising theory, invariance principle-based approaches fail in common classification tasks where invariant (causal) features capture all the information about the label. Are these failures due to the methods failing to capture the invariance? Or is the invariance principle itself insufficient? To answer these questions, we revisit the fundamental assumptions in linear regression tasks, where invariance-based approaches were shown to provably generalize OOD. In contrast to linear regression, we show that linear classification tasks require much stronger restrictions on the distribution shifts; otherwise, OOD generalization is impossible. Furthermore, even with appropriate restrictions on distribution shifts in place, we show that the invariance principle alone is insufficient. We prove that a form of the information bottleneck constraint, combined with invariance, helps address the key failures that arise when invariant features capture all the information about the label, while retaining the existing successes when they do not. We propose an approach that incorporates both of these principles and demonstrate its effectiveness in several experiments.
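To make the proposed combination concrete, the sketch below shows one way to assemble a training objective that adds an information bottleneck surrogate to an invariance penalty. This is a minimal illustration under stated assumptions, not the paper's exact formulation: featurizer, classifier, and envs are placeholder names, the IRMv1 gradient penalty stands in for the invariance constraint, per-environment feature variance is used as a simple surrogate for the bottleneck term, and lambda_irm and gamma_ib are illustrative hyperparameters.

```python
import torch
import torch.nn.functional as F


def irm_penalty(logits, y):
    # IRMv1-style invariance penalty: squared gradient of the environment risk
    # with respect to a fixed "dummy" classifier scale.
    scale = torch.ones(1, requires_grad=True, device=logits.device)
    risk = F.binary_cross_entropy_with_logits(logits * scale, y)
    grad = torch.autograd.grad(risk, [scale], create_graph=True)[0]
    return (grad ** 2).sum()


def ib_irm_objective(featurizer, classifier, envs, lambda_irm=1e2, gamma_ib=1e-1):
    """Average risk + invariance penalty + bottleneck surrogate over environments.

    envs is a list of (x, y) tensor pairs, one per training environment,
    with y holding binary labels as floats.
    """
    risk, inv_pen, ib_pen = 0.0, 0.0, 0.0
    for x, y in envs:
        z = featurizer(x)                   # learned representation Phi(x)
        logits = classifier(z).squeeze(-1)  # one scalar logit per example
        risk += F.binary_cross_entropy_with_logits(logits, y)
        inv_pen += irm_penalty(logits, y)
        ib_pen += z.var(dim=0).mean()       # feature variance as an entropy surrogate
    n = len(envs)
    return (risk + lambda_irm * inv_pen + gamma_ib * ib_pen) / n
```

A training loop would simply backpropagate through this scalar; the relative weights on the invariance and bottleneck terms control the trade-off between enforcing an invariant predictor across environments and compressing the learned representation.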


