Deconfounding and Causal Regularization for Stability and External Validity

08/14/2020
by Peter Bühlmann et al.

We review, from a unified viewpoint, some recent work on removing hidden confounding and on causal regularization. We describe how simple, user-friendly techniques improve stability, replicability, and distributional robustness for heterogeneous data. In this sense, we offer additional thoughts on the issue of concept drift, raised by Efron (2020), which arises when the data-generating distribution is changing.
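As a concrete illustration of what causal regularization can look like, the sketch below implements anchor regression (Rothenhäusler et al.), one estimator associated with this line of work: the least-squares residual is split into the part explained by exogenous "anchor" variables and the remainder, and the anchor-explained part is penalized by a factor gamma. This is a minimal NumPy sketch of that linear formulation, not the paper's own code; the toy data and variable names are purely illustrative.

```python
import numpy as np

def anchor_regression(X, Y, A, gamma):
    """Anchor regression: a simple form of causal regularization.

    Solves  min_b ||(I - P_A)(Y - X b)||^2 + gamma * ||P_A (Y - X b)||^2,
    where P_A projects onto the span of the anchor variables A
    (exogenous sources of heterogeneity, e.g. environment indicators).
    gamma = 1 recovers OLS; larger gamma trades in-sample fit for
    stability under shifts generated along the anchor directions.
    """
    n = X.shape[0]
    P_A = A @ np.linalg.pinv(A.T @ A) @ A.T             # projection onto anchors
    W = np.eye(n) + (np.sqrt(gamma) - 1.0) * P_A        # equivalent data transform
    b, *_ = np.linalg.lstsq(W @ X, W @ Y, rcond=None)   # OLS on transformed data
    return b

# Toy example with a hidden confounder H affecting both X and Y.
rng = np.random.default_rng(0)
n = 500
A = rng.normal(size=(n, 2))                       # anchors / environment variables
H = rng.normal(size=(n, 1))                       # hidden confounder
X = A @ rng.normal(size=(2, 5)) + H + rng.normal(size=(n, 5))
Y = X[:, 0] + 2.0 * H[:, 0] + rng.normal(size=n)  # true causal effect only via X[:, 0]
b_ols = anchor_regression(X, Y, A, gamma=1.0)     # plain least squares
b_anchor = anchor_regression(X, Y, A, gamma=10.0) # causally regularized fit
```

A convenient property of this formulation is that it reduces to ordinary least squares on linearly transformed data, so it plugs into an existing least-squares pipeline with a one-line change.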

Related research

Interpolation and Regularization for Causal Learning (02/18/2022)
Removing Hidden Confounding by Experimental Grounding (10/27/2018)
Inferring Heterogeneous Causal Effects in Presence of Spatial Confounding (01/28/2019)
Distributionally robust and generalizable inference (09/19/2022)
HRTF Individualization: A Survey (03/13/2020)
Invariance, Causality and Robustness (12/19/2018)