WeLa-VAE: Learning Alternative Disentangled Representations Using Weak Labels

08/22/2020
by   Vasilis Margonis, et al.

Learning disentangled representations without supervision or inductive biases often leads to representations that are uninterpretable or otherwise undesirable. On the other hand, strict supervision requires detailed knowledge of the true generative factors, which is not always available. In this paper, we consider weak supervision by means of high-level labels that are not assumed to be explicitly related to the ground-truth factors. Such labels, while easier to acquire, can also serve as inductive biases that steer algorithms toward more interpretable or alternative disentangled representations. To this end, we propose WeLa-VAE, a variational inference framework in which observations and labels share the same latent variables and training maximizes a modified variational lower bound with total correlation regularization. Our method generalizes TCVAE, adding only one extra hyperparameter. We experiment on a dataset generated from Cartesian coordinates and show that, while TCVAE learns a factorized Cartesian representation, WeLa-VAE, given weak labels of distance and angle, learns and disentangles a polar representation. This is achieved without the need for refined labels or any adjustment of the number of layers, the optimization parameters, or the total correlation hyperparameter.
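To make the setup concrete, below is a minimal PyTorch sketch of what a WeLa-VAE-style objective could look like, based only on the description above: observations x and weak labels y are decoded from the same latent variables z, and the loss combines a modified evidence lower bound with a total-correlation penalty as in TCVAE. The network sizes, the Gaussian likelihoods, the hyperparameter names beta and gamma, and the simple minibatch TC estimate are illustrative assumptions, not the authors' implementation.

# A minimal, illustrative sketch of a WeLa-VAE-style objective, based only on the abstract:
# observations x and weak labels y share the same latent variables z, and the loss is a
# modified variational lower bound plus a total-correlation penalty as in (beta-)TCVAE.
# Architecture, likelihoods, hyperparameter names, and the crude TC estimate are assumptions.
import math
import torch
import torch.nn as nn
import torch.nn.functional as F


class WeLaVAESketch(nn.Module):
    def __init__(self, x_dim=2, y_dim=2, z_dim=2, hidden=128):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(x_dim, hidden), nn.ReLU(),
                                     nn.Linear(hidden, 2 * z_dim))  # mean and log-variance of q(z|x)
        self.decoder_x = nn.Sequential(nn.Linear(z_dim, hidden), nn.ReLU(),
                                       nn.Linear(hidden, x_dim))    # reconstructs the observation x
        self.decoder_y = nn.Sequential(nn.Linear(z_dim, hidden), nn.ReLU(),
                                       nn.Linear(hidden, y_dim))    # predicts the weak labels y from the same z

    def forward(self, x):
        mu, logvar = self.encoder(x).chunk(2, dim=-1)
        z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()        # reparameterization trick
        return self.decoder_x(z), self.decoder_y(z), mu, logvar, z


def total_correlation(z, mu, logvar):
    """Crude minibatch estimate of TC(z), in the spirit of beta-TCVAE (constants omitted)."""
    # Pairwise Gaussian log-densities log q(z_i,j | x_k), shape (batch, batch, z_dim)
    log_q = -0.5 * ((z.unsqueeze(1) - mu.unsqueeze(0)) ** 2 / logvar.exp().unsqueeze(0)
                    + logvar.unsqueeze(0) + math.log(2 * math.pi))
    log_qz = torch.logsumexp(log_q.sum(dim=2), dim=1)               # log q(z), up to a constant
    log_qz_marg = torch.logsumexp(log_q, dim=1).sum(dim=1)          # sum_j log q(z_j), up to a constant
    return (log_qz - log_qz_marg).mean()


def wela_vae_loss(model, x, y, beta=6.0, gamma=1.0):
    """beta weights the TC term as in TCVAE; gamma weights the weak-label term."""
    x_hat, y_hat, mu, logvar, z = model(x)
    recon_x = F.mse_loss(x_hat, x, reduction="mean")                # observation reconstruction
    recon_y = F.mse_loss(y_hat, y, reduction="mean")                # weak-label reconstruction
    kl = -0.5 * (1 + logvar - mu ** 2 - logvar.exp()).sum(dim=1).mean()
    tc = total_correlation(z, mu, logvar)
    return recon_x + gamma * recon_y + kl + beta * tc

In the paper's experiment, x would correspond to the Cartesian observations and y to the weak distance and angle labels; in this sketch, gamma (the weight on the label term) plays the role of the single extra hyperparameter relative to TCVAE, and setting it to zero recovers a TCVAE-like objective.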


Related research

12/11/2019 · A Closer Look at Disentangling in β-VAE
In many data analysis tasks, it is beneficial to learn representations w...

04/18/2023 · CF-VAE: Causal Disentangled Representation Learning with VAE and Causal Flows
Learning disentangled representations is important in representation lea...

02/12/2021 · Demystifying Inductive Biases for β-VAE Based Architectures
The performance of β-Variational-Autoencoders (β-VAEs) and their variant...

01/24/2019 · Learning Disentangled Representations with Reference-Based Variational Autoencoders
Learning disentangled representations from visual data, where different ...

02/28/2023 · Representation Disentanglement via Regularization by Identification
This work focuses on the problem of learning disentangled representation...

09/12/2020 · Revisiting Factorizing Aggregated Posterior in Learning Disentangled Representations
In the problem of learning disentangled representations, one of the prom...

08/24/2023 · Disentanglement Learning via Topology
We propose TopDis (Topological Disentanglement), a method for learning d...
