Learning Robust Models Using The Principle of Independent Causal Mechanisms

10/14/2020
by Jens Müller, et al.

Standard supervised learning breaks down under data distribution shift. However, the principle of independent causal mechanisms (ICM, Peters et al. (2017)) can turn this weakness into an opportunity: one can take advantage of distribution shift between different environments during training in order to obtain more robust models. We propose a new gradient-based learning framework whose objective function is derived from the ICM principle. We show theoretically and experimentally that neural networks trained in this framework focus on relations remaining invariant across environments and ignore unstable ones. Moreover, we prove that the recovered stable relations correspond to the true causal mechanisms under certain conditions. In both regression and classification, the resulting models generalize well to unseen scenarios where traditionally trained models fail.
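The abstract describes a gradient-based framework that exploits distribution shift across training environments so that the learned predictor relies only on relations that stay invariant. The paper's exact ICM-based objective is not reproduced here; as a rough illustration of the general multi-environment invariance idea, the sketch below trains a linear regressor on a toy problem with a stable causal feature and an environment-dependent spurious feature, using an IRMv1-style gradient penalty (Arjovsky et al., 2019) as a stand-in invariance term. The data-generating function, the spurious weights, and the penalty coefficient are hypothetical choices for illustration, not taken from the paper.

import torch
import torch.nn as nn

def make_environment(n, spurious_weight, seed):
    # Toy setup: y is produced from x_causal by a mechanism shared across all
    # environments, while x_spurious depends on y with an environment-specific
    # weight (the unstable relation).
    g = torch.Generator().manual_seed(seed)
    x_causal = torch.randn(n, 1, generator=g)
    y = x_causal + 0.1 * torch.randn(n, 1, generator=g)
    x_spurious = spurious_weight * y + 0.1 * torch.randn(n, 1, generator=g)
    return torch.cat([x_causal, x_spurious], dim=1), y

# Three training environments that differ only in the spurious relation.
envs = [make_environment(1000, w, seed=s) for s, w in enumerate([2.0, -1.0, 0.5])]

model = nn.Linear(2, 1)
opt = torch.optim.Adam(model.parameters(), lr=1e-2)
mse = nn.MSELoss()

def invariance_penalty(x, y):
    # IRMv1-style penalty: squared gradient of the per-environment risk with
    # respect to a fixed dummy scale; it is small only if the predictor is
    # simultaneously (near-)optimal in that environment.
    scale = torch.ones(1, requires_grad=True)
    loss = mse(model(x) * scale, y)
    grad = torch.autograd.grad(loss, scale, create_graph=True)[0]
    return (grad ** 2).sum()

for step in range(2000):
    risks = torch.stack([mse(model(x), y) for x, y in envs])
    penalties = torch.stack([invariance_penalty(x, y) for x, y in envs])
    loss = risks.mean() + 10.0 * penalties.mean()
    opt.zero_grad()
    loss.backward()
    opt.step()

# The weight on the spurious input (second column) should shrink toward zero,
# while the weight on the stable, causal input stays close to one.
print(model.weight.detach())

In the spirit of the abstract, the regressor that survives this kind of multi-environment training is the one resting on the stable relation, and it keeps working in test environments where the spurious weight takes unseen values.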

Related research

03/29/2022  Causal de Finetti: On the Identification of Invariant Causal Structure in Exchangeable Data
Learning invariant causal structure often relies on conditional independ...

06/02/2023  Learning Causally Disentangled Representations via the Principle of Independent Causal Mechanisms
Learning disentangled causal representations is a challenging problem th...

04/01/2020  A theory of independent mechanisms for extrapolation in generative models
Deep generative models reproduce complex empirical data but cannot extra...

10/12/2020  The Risks of Invariant Risk Minimization
Invariant Causal Prediction (Peters et al., 2016) is a technique for out...

08/21/2020  Amortized learning of neural causal representations
Causal models can compactly and efficiently encode the data-generating p...

02/20/2020  I-SPEC: An End-to-End Framework for Learning Transportable, Shift-Stable Models
Shifts in environment between development and deployment cause classical...

05/28/2019  Semi-Supervised Learning, Causality and the Conditional Cluster Assumption
While the success of semi-supervised learning (SSL) is still not fully u...
