Disentangling Variational Autoencoders

11/14/2022
by Rafael Pastrana, et al.

A variational autoencoder (VAE) is a probabilistic machine learning framework for posterior inference that projects an input set of high-dimensional data to a lower-dimensional, latent space. The latent space learned with a VAE offers exciting opportunities to develop new data-driven design processes in creative disciplines, in particular, to automate the generation of multiple novel designs that are aesthetically reminiscent of the input data but that were unseen during training. However, the learned latent space is typically disorganized and entangled: traversing the latent space along a single dimension does not result in changes to a single visual attribute of the data. This lack of latent structure prevents designers from deliberately controlling the visual attributes of new designs generated from the latent space. This paper presents an experimental study of latent space disentanglement. We implement three different VAE models from the literature and train them on a publicly available dataset of 60,000 images of hand-written digits. We perform a sensitivity analysis to find the small number of latent dimensions necessary to maximize a lower bound to the log marginal likelihood of the data. Furthermore, we investigate the trade-offs between the quality of the reconstruction of the decoded images and the level of disentanglement of the latent space. We are able to automatically align three latent dimensions with three interpretable visual properties of the digits: line weight, tilt and width. Our experiments suggest that i) increasing the contribution of the Kullback-Leibler divergence between the prior over the latents and the variational distribution to the evidence lower bound, and ii) conditioning on the input image class enhance the learning of a disentangled latent space with a VAE.
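To make the two findings concrete, the sketch below shows how a KL-weighted (β-VAE-style) objective and one-hot class conditioning are commonly implemented in PyTorch. This is a minimal illustration under stated assumptions, not the paper's exact models: the function names, the β value, and the MNIST-sized shapes are hypothetical.

```python
# Minimal sketch of a beta-weighted ELBO and class conditioning (assumptions:
# binarized 28x28 inputs flattened to 784, diagonal-Gaussian posterior,
# standard-normal prior; beta=4.0 is an illustrative choice).
import torch
import torch.nn.functional as F

def beta_elbo_loss(x, x_recon, mu, log_var, beta=4.0):
    """Negative ELBO with the KL term scaled by beta.

    x, x_recon : inputs and decoder outputs in [0, 1], shape (batch, 784)
    mu, log_var: parameters of the diagonal-Gaussian posterior q(z|x)
    beta       : weight on the KL term; beta > 1 strengthens the prior
                 regularization associated with disentanglement
    """
    # Reconstruction term: Bernoulli log-likelihood of the input pixels.
    recon = F.binary_cross_entropy(x_recon, x, reduction="sum") / x.size(0)
    # KL(q(z|x) || N(0, I)), in closed form for a diagonal Gaussian.
    kl = -0.5 * torch.sum(1 + log_var - mu.pow(2) - log_var.exp()) / x.size(0)
    return recon + beta * kl

def condition_on_class(z, labels, num_classes=10):
    """Conditional-VAE-style conditioning: append a one-hot class label to
    the latent code before decoding (the same concatenation can also be
    applied to the encoder input)."""
    one_hot = F.one_hot(labels, num_classes=num_classes).float()
    return torch.cat([z, one_hot], dim=1)
```

With beta = 1 the loss reduces to the standard negative ELBO; raising beta trades reconstruction quality for a more structured latent space, mirroring the trade-off studied in the paper.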


