Unpaired Multi-Domain Causal Representation Learning

02/02/2023
by   Nils Sturma, et al.

The goal of causal representation learning is to find a representation of data that consists of causally related latent variables. We consider a setup where one has access to data from multiple domains that potentially share a causal representation. Crucially, observations in different domains are assumed to be unpaired: we only observe the marginal distribution in each domain, not the joint distribution across domains. In this paper, we give sufficient conditions for identifiability of the joint distribution and the shared causal graph in a linear setup. Identifiability holds if the joint distribution and the shared causal representation can be uniquely recovered from the marginal distributions of the individual domains. We turn our identifiability results into a practical method for recovering the shared latent causal graph. Moreover, we study how multiple domains reduce the error of falsely detecting shared causal variables in the finite-sample setting.
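To make the setup concrete, the following is a minimal simulation sketch of the data-generating process the abstract describes: shared latent variables follow a linear structural equation model, and each domain observes its own linear mixing of an independent latent sample, so observations across domains are unpaired. The matrix names (B, G), the function sample_domain, and the choice of Laplace noise are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Shared latent variables follow a linear SCM: Z = B Z + eps, i.e.
# Z = (I - B)^{-1} eps, with B strictly lower triangular (acyclic graph).
d_latent = 3
B = np.array([[0.0,  0.0, 0.0],
              [0.8,  0.0, 0.0],
              [0.0, -0.5, 0.0]])   # shared causal graph: Z1 -> Z2 -> Z3
A = np.linalg.inv(np.eye(d_latent) - B)  # maps exogenous noise to latents

def sample_domain(G, n):
    """Draw n observations for one domain with mixing matrix G.

    Each call draws a fresh latent sample, so samples from different
    domains are unpaired: only marginal distributions are observed.
    """
    eps = rng.laplace(size=(n, d_latent))  # non-Gaussian exogenous noise
    Z = eps @ A.T                          # latent causal variables
    return Z @ G.T + 0.1 * rng.normal(size=(n, G.shape[0]))

# Two domains with different observed dimensions and mixing matrices.
X1 = sample_domain(rng.normal(size=(5, d_latent)), n=1000)
X2 = sample_domain(rng.normal(size=(7, d_latent)), n=1000)
```

Under this kind of model, the identifiability question is whether the shared graph encoded in B can be recovered from the marginals of X1 and X2 alone, without any pairing of rows across the two matrices.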


Related research

07/11/2023 · A Causal Ordering Prior for Unsupervised Representation Learning
Unsupervised representation learning with variational inference relies h...

02/09/2019 · Multi-Domain Translation by Learning Uncoupled Autoencoders
Multi-domain translation seeks to learn a probabilistic coupling between...

06/26/2023 · Leveraging Task Structures for Improved Identifiability in Neural Network Representations
This work extends the theory of identifiability in supervised learning b...

06/06/2021 · A Meta Learning Approach to Discerning Causal Graph Structure
We explore the usage of meta-learning to derive the causal direction bet...

09/02/2019 · Unifying Causal Models with Trek Rules
In many scientific contexts, different investigators experiment with or ...

07/12/2022 · Language-Based Causal Representation Learning
Consider the finite state graph that results from a simple, discrete, dy...

06/08/2021 · Self-Supervised Learning with Data Augmentations Provably Isolates Content from Style
Self-supervised representation learning has shown remarkable success in ...
