Product of Orthogonal Spheres Parameterization for Disentangled Representation Learning

07/22/2019
by Ankita Shukla, et al.

Learning representations that disentangle the explanatory attributes underlying the data improves interpretability and provides control over data generation. Various learning frameworks, such as VAEs, GANs, and auto-encoders, have been used in the literature to learn such representations. Most often, the latent space is constrained to a partitioned representation or structured by a prior to impose disentangling. In this work, we advance the use of a latent representation based on a product space of Orthogonal Spheres (PrOSe). The PrOSe model is motivated by the reasoning that latent variables related to the physics of image formation can, under certain relaxed assumptions, lead to spherical spaces. Orthogonality between the spheres is motivated by models of physical independence. Imposing the orthogonal-sphere constraint is much simpler than adopting more complicated physical models, is fairly general and flexible, and is extensible beyond the factors used to motivate its development. Under the further relaxed assumption of equal-sized latent blocks per factor, the constraint can be written in closed form as an orthonormality term in the loss function. We show that our approach significantly improves the quality of disentanglement, with consistent gains over several state-of-the-art approaches across multiple benchmarks and metrics.
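
To make the closed-form constraint concrete: the abstract says that, with equal-sized latent blocks per factor, the orthogonal-sphere constraint reduces to an orthonormality term in the loss. The exact formulation is not given here, so the sketch below is only one possible reading, not the authors' implementation. It assumes a PyTorch auto-encoder or VAE whose latent code z is split into equal-sized blocks; the function name prose_orthonormality_penalty, the tensor shapes, and the weight lam are all hypothetical.

import torch

def prose_orthonormality_penalty(z: torch.Tensor, num_blocks: int) -> torch.Tensor:
    # Reshape each latent code into num_blocks equal-sized blocks and push the
    # blocks' Gram matrix toward the identity, so every block has unit norm
    # (lies on a sphere) and distinct blocks are mutually orthogonal.
    # For exact orthonormality, num_blocks should not exceed the block size.
    batch_size, latent_dim = z.shape
    block_dim = latent_dim // num_blocks
    blocks = z.view(batch_size, num_blocks, block_dim)    # (B, k, d)
    gram = torch.bmm(blocks, blocks.transpose(1, 2))      # (B, k, k)
    identity = torch.eye(num_blocks, device=z.device, dtype=z.dtype)
    return ((gram - identity) ** 2).sum(dim=(1, 2)).mean()

# Example usage (hypothetical names): add the penalty to the usual training objective.
# loss = reconstruction_loss + kl_loss + lam * prose_orthonormality_penalty(z, num_blocks=4)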

