Disentanglement by Nonlinear ICA with General Incompressible-flow Networks (GIN)

01/14/2020
by   Peter Sorrenson, et al.

A central question of representation learning asks under which conditions it is possible to reconstruct the true latent variables of an arbitrarily complex generative process. Recent breakthrough work by Khemakhem et al. (2019) on nonlinear ICA has answered this question for a broad class of conditional generative processes. We extend this important result in a direction relevant for application to real-world data. First, we generalize the theory to the case of unknown intrinsic problem dimension and prove that in some special (but not very restrictive) cases, informative latent variables will be automatically separated from noise by an estimating model. Furthermore, the recovered informative latent variables will be in one-to-one correspondence with the true latent variables of the generating process, up to a trivial component-wise transformation. Second, we introduce a modification of the RealNVP invertible neural network architecture (Dinh et al. (2016)) which is particularly suitable for this type of problem: the General Incompressible-flow Network (GIN). Experiments on artificial data and EMNIST demonstrate that theoretical predictions are indeed verified in practice. In particular, we provide a detailed set of exactly 22 informative latent variables extracted from EMNIST.
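The "incompressible-flow" property of GIN refers to coupling layers whose Jacobian determinant is exactly one, i.e. volume-preserving transformations. A minimal NumPy sketch of this idea, assuming a RealNVP-style affine coupling in which the log-scales are shifted to sum to zero (the `s_fn` and `t_fn` functions below are hypothetical stand-ins for the learned subnetworks of the actual architecture):

```python
import numpy as np

def gin_coupling_forward(x, s_fn, t_fn):
    """One volume-preserving coupling block: like RealNVP's affine
    coupling, but the log-scales are re-centered to sum to zero,
    so the Jacobian determinant is exactly 1."""
    d = x.shape[-1] // 2
    x1, x2 = x[..., :d], x[..., d:]
    s = s_fn(x1)
    s = s - s.mean(axis=-1, keepdims=True)  # enforce sum(s) = 0
    y2 = x2 * np.exp(s) + t_fn(x1)
    return np.concatenate([x1, y2], axis=-1), s

def gin_coupling_inverse(y, s_fn, t_fn):
    """Exact inverse of the coupling block above."""
    d = y.shape[-1] // 2
    y1, y2 = y[..., :d], y[..., d:]
    s = s_fn(y1)
    s = s - s.mean(axis=-1, keepdims=True)
    x2 = (y2 - t_fn(y1)) * np.exp(-s)
    return np.concatenate([y1, x2], axis=-1)

# Toy subnetworks standing in for learned neural nets (illustrative only).
s_fn = lambda h: np.tanh(h)
t_fn = lambda h: 0.5 * h

x = np.random.default_rng(0).normal(size=(4, 6))
y, s = gin_coupling_forward(x, s_fn, t_fn)
x_rec = gin_coupling_inverse(y, s_fn, t_fn)
```

Because the log-scales sum to zero, the log-determinant of each block vanishes and the flow preserves volume; invertibility follows directly from the coupling structure, as in RealNVP.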


