
Causally Invariant Predictor with Shift-Robustness

07/05/2021
by Xiangyu Zheng et al.

This paper proposes an invariant causal predictor that is robust to distribution shift across domains and maximally preserves the transferable invariant information. Based on a disentangled causal factorization, we formulate distribution shift as soft interventions in the system, which covers a wide range of shifts because we impose no prior assumptions on the causal structure or on which variables are intervened. Instead of adding regularization terms to constrain the predictor to be invariant, we propose to predict by the intervened conditional expectation defined through the do-operator, and we prove that this predictor is invariant across domains. More importantly, we prove that it is the robust predictor that minimizes the worst-case quadratic loss over the distributions of all domains. For empirical learning, we propose an intuitive and flexible estimation method based on data regeneration, together with a local causal discovery procedure that guides the regeneration step. The key idea is to regenerate the data so that the regenerated distribution is compatible with the intervened graph, which lets us apply standard supervised learning methods to the regenerated data. Experimental results on both synthetic and real data demonstrate the efficacy of our predictor in improving predictive accuracy and robustness across domains.
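
To make the regeneration idea concrete, here is a minimal sketch (not the paper's implementation) on a toy linear structural causal model of our own choosing, X1 -> Y -> X2. Regressing Y on (X1, X2) exploits the non-invariant Y -> X2 dependence; regenerating X2 so that the sample is compatible with the graph intervened by do(X1, X2), roughly targeting the intervened conditional expectation E[Y | do(X1, X2)], leaves weight only on the causal parent X1. The variable names and the permutation-based regeneration are illustrative assumptions, not the paper's procedure, which uses local causal discovery to decide what to regenerate.

    import numpy as np
    from sklearn.linear_model import LinearRegression

    rng = np.random.default_rng(0)
    n = 10_000

    # Toy SCM (illustrative assumption, not the paper's experiment): X1 -> Y -> X2.
    x1 = rng.normal(size=n)
    y = 2.0 * x1 + rng.normal(size=n)
    x2 = 1.5 * y + rng.normal(size=n)

    # Ordinary supervised learning leans on X2, an effect of Y whose
    # relation to Y can change under soft interventions across domains.
    obs = LinearRegression().fit(np.c_[x1, x2], y)

    # Regeneration sketch: permuting X2 cuts the Y -> X2 edge, so the
    # regenerated sample is compatible with the intervened graph and
    # X2 carries no information about Y.
    x2_regen = rng.permutation(x2)
    regen = LinearRegression().fit(np.c_[x1, x2_regen], y)

    print("observational coefficients:", obs.coef_)    # nonzero weight on X2
    print("regenerated coefficients:  ", regen.coef_)  # approx. [2, 0]

Because the regenerated data already obey the intervened graph, any off-the-shelf supervised learner can be trained on them unchanged, which is the flexibility the abstract refers to.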


Related research

Invariant and Transportable Representations for Anti-Causal Domain Shifts (07/04/2022)
Real-world classification problems must contend with domain shift, the (...

Harmonization and the Worst Scanner Syndrome (01/15/2021)
We show that for a wide class of harmonization/domain-invariance schemes...

Out-of-Distribution Generalization with Maximal Invariant Predictor (08/04/2020)
Out-of-Distribution (OOD) generalization problem is a problem of seeking...

A Unified Causal View of Domain Invariant Representation Learning (08/15/2022)
Machine learning methods can be unreliable when deployed in domains that...

Learning Counterfactually Invariant Predictors (07/20/2022)
We propose a method to learn predictors that are invariant under counter...

Causally-motivated Shortcut Removal Using Auxiliary Labels (05/13/2021)
Robustness to certain distribution shifts is a key requirement in many M...

Evaluating Robustness to Dataset Shift via Parametric Robustness Sets (05/31/2022)
We give a method for proactively identifying small, plausible shifts in ...