Additive Decoders for Latent Variables Identification and Cartesian-Product Extrapolation

07/05/2023
by Sébastien Lachapelle, et al.
We tackle the problems of latent variables identification and "out-of-support" image generation in representation learning. We show that both are possible for a class of decoders that we call additive, which are reminiscent of decoders used for object-centric representation learning (OCRL) and well suited for images that can be decomposed as a sum of object-specific images. We provide conditions under which exactly solving the reconstruction problem with an additive decoder is guaranteed to identify the blocks of latent variables up to permutation and block-wise invertible transformations. This guarantee relies only on very weak assumptions about the distribution of the latent factors, which may exhibit statistical dependencies and have an almost arbitrarily shaped support. Our result provides a new setting in which nonlinear independent component analysis (ICA) is possible and adds to our theoretical understanding of OCRL methods. We also show theoretically that additive decoders can generate novel images by recombining observed factors of variation in novel ways, an ability we refer to as Cartesian-product extrapolation. We show empirically that additivity is crucial for both identifiability and extrapolation on simulated data.
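The additive structure described above is simple to state in code: the latent vector is split into blocks, each block is decoded by its own network, and the image is the sum of the block-specific outputs. The following is a minimal NumPy sketch, not the authors' implementation; the block sizes, the tiny random MLPs standing in for the per-block decoders, and all names (`make_block_decoder`, `additive_decode`) are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_block_decoder(latent_dim, image_dim, rng):
    """A tiny random MLP standing in for one object-specific decoder f_k
    (hypothetical stand-in; the paper makes no architectural commitment)."""
    W1 = rng.standard_normal((latent_dim, 16))
    W2 = rng.standard_normal((16, image_dim))
    return lambda z_k: np.tanh(z_k @ W1) @ W2

latent_blocks = [2, 2]   # two latent blocks of size 2 (illustrative choice)
image_dim = 8 * 8        # flattened "image"
decoders = [make_block_decoder(d, image_dim, rng) for d in latent_blocks]

def additive_decode(z):
    """Additive decoder f(z) = sum_k f_k(z_k): the image is the sum of
    block-specific images, one per latent block."""
    parts, start = [], 0
    for d, f_k in zip(latent_blocks, decoders):
        parts.append(f_k(z[start:start + d]))
        start += d
    return sum(parts)

z = rng.standard_normal(sum(latent_blocks))
x = additive_decode(z)

# Cartesian-product extrapolation: recombine blocks observed in two
# different samples. By additivity, the decoder produces a coherent novel
# image f_1(z_a's block) + f_2(z_b's block) even if this block combination
# never occurred in the training support.
z_a = rng.standard_normal(sum(latent_blocks))
z_b = rng.standard_normal(sum(latent_blocks))
x_new = additive_decode(np.concatenate([z_a[:2], z_b[2:]]))
```

The last lines illustrate why additivity enables extrapolation: because each block contributes an independent summand, any combination of observed per-block values maps to a well-defined image, i.e. the decoder is defined on the Cartesian product of the observed block supports.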

Related research:

- 07/11/2023: A Causal Ordering Prior for Unsupervised Representation Learning
- 06/02/2022: Weakly Supervised Representation Learning with Sparse Perturbations
- 01/14/2020: Disentanglement by Nonlinear ICA with General Incompressible-flow Networks (GIN)
- 07/21/2021: Discovering Latent Causal Variables via Mechanism Sparsity: A New Principle for Nonlinear ICA
- 06/05/2021: Local Disentanglement in Variational Auto-Encoders Using Jacobian L_1 Regularization
- 06/28/2023: Identifiability of Discretized Latent Coordinate Systems via Density Landmarks Detection
- 10/29/2021: Properties from Mechanisms: An Equivariance Perspective on Identifiable Representation Learning
