Nonlinear Invariant Risk Minimization: A Causal Approach

02/24/2021
by Chaochao Lu, et al.

Due to spurious correlations, machine learning systems often fail to generalize to environments whose distributions differ from the ones used at training time. Prior work addressing this issue has, explicitly or implicitly, attempted to find a data representation that has an invariant causal relationship with the target. This is done by leveraging a diverse set of training environments to reduce the effect of spurious features and build an invariant predictor. However, these methods have generalization guarantees only when both the data representation and the classifier come from a linear model class. We propose Invariant Causal Representation Learning (ICRL), a learning paradigm that enables out-of-distribution (OOD) generalization in the nonlinear setting (i.e., with nonlinear representations and nonlinear classifiers). It builds upon a practical and general assumption: the prior over the data representation factorizes when conditioning on the target and the environment. Based on this assumption, we show that the data representation is identifiable up to very simple transformations. We also prove that all direct causes of the target can be fully discovered, which in turn yields generalization guarantees in the nonlinear setting. Extensive experiments on both synthetic and real-world datasets show that our approach significantly outperforms a variety of baseline methods. Finally, in the concluding discussion, we further explore the aforementioned assumption and propose a general view, called the Agnostic Hypothesis: there exists a set of hidden causal factors affecting both inputs and outcomes. The Agnostic Hypothesis provides a unifying view of machine learning in terms of representation learning. More importantly, it suggests a new direction for developing a general theory of identifying hidden causal factors, which is key to enabling OOD generalization guarantees in machine learning.
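The factorization assumption above is the technical core of the paradigm. As an illustrative aid only (not the authors' code), the sketch below shows one way such a conditionally factorized prior p(Z | Y, E) = ∏_i p(Z_i | Y, E) over a learned representation Z = Φ(X) could be written in PyTorch, using independent Gaussian factors whose parameters depend on the target Y and a one-hot environment index E. All class, function, and variable names here are assumptions made for illustration.

```python
# Minimal sketch of ICRL's key assumption: the prior over the
# representation Z = Phi(X) factorizes conditionally on (Y, E),
#   p(Z | Y, E) = prod_i p(Z_i | Y, E).
# Illustrative only; all names are assumptions, not the authors' code.

import math
import torch
import torch.nn as nn


class FactorizedConditionalPrior(nn.Module):
    """p(Z | Y, E) as a product of independent Gaussians whose
    per-dimension means and variances are functions of (Y, E)."""

    def __init__(self, y_dim: int, e_dim: int, z_dim: int, hidden: int = 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(y_dim + e_dim, hidden),
            nn.ReLU(),
            nn.Linear(hidden, 2 * z_dim),  # per-dimension mean and log-variance
        )

    def forward(self, y: torch.Tensor, e: torch.Tensor):
        mean, log_var = self.net(torch.cat([y, e], dim=-1)).chunk(2, dim=-1)
        return mean, log_var

    def log_prob(self, z: torch.Tensor, y: torch.Tensor, e: torch.Tensor):
        mean, log_var = self(y, e)
        # Factorized Gaussian log-density: independent terms summed over dims.
        per_dim = -0.5 * (log_var + (z - mean) ** 2 / log_var.exp()
                          + math.log(2 * math.pi))
        return per_dim.sum(dim=-1)


if __name__ == "__main__":
    prior = FactorizedConditionalPrior(y_dim=1, e_dim=3, z_dim=8)
    y = torch.randn(16, 1)                       # target
    e = torch.eye(3)[torch.randint(3, (16,))]    # one-hot environment
    z = torch.randn(16, 8)                       # candidate representation Phi(X)
    print(prior.log_prob(z, y, e).shape)         # torch.Size([16])
```

In a VAE-style implementation, a term like this prior's log-density would replace the usual unconditional standard-normal prior in the ELBO, which is what ties the learned representation to the target and environment and makes the identifiability argument possible.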


Related research

07/05/2019  Invariant Risk Minimization
We introduce Invariant Risk Minimization (IRM), a learning paradigm to e...

02/11/2020  Invariant Risk Minimization Games
The standard risk minimization paradigm of machine learning is brittle w...

03/27/2022  Causality Inspired Representation Learning for Domain Generalization
Domain generalization (DG) is essentially an out-of-distribution problem...

11/03/2022  FedGen: Generalizable Federated Learning
Existing federated learning models that follow the standard risk minimiz...

05/31/2022  Differentiable Invariant Causal Discovery
Learning causal structure from observational data is a fundamental chall...

06/28/2021  Understanding Dynamics of Nonlinear Representation Learning and Its Application
Representations of the world environment play a crucial role in machine ...

04/26/2022  Causal Transportability for Visual Recognition
Visual representations underlie object recognition tasks, but they often...
