FONDUE: an algorithm to find the optimal dimensionality of the latent representations of variational autoencoders

09/26/2022
by Lisa Bonheme, et al.

When training a variational autoencoder (VAE) on a given dataset, determining the optimal number of latent variables is mostly done by grid search: a costly process in terms of computational time and carbon footprint. In this paper, we explore the intrinsic dimension estimation (IDE) of the data and latent representations learned by VAEs. We show that the discrepancies between the IDE of the mean and sampled representations of a VAE after only a few steps of training reveal the presence of passive variables in the latent space, which, in well-behaved VAEs, indicates a superfluous number of dimensions. Using this property, we propose FONDUE: an algorithm which quickly finds the number of latent dimensions after which the mean and sampled representations start to diverge (i.e., when passive variables are introduced), providing a principled method for selecting the number of latent dimensions for VAEs and autoencoders.
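The core idea lends itself to a short illustration. The sketch below is not the paper's exact FONDUE procedure: it assumes a TwoNN intrinsic dimension estimator (Facco et al., 2017), a hypothetical user-supplied hook get_representations(d) that trains a VAE with d latent dimensions for a few steps and returns the mean and sampled encodings of a held-out batch, and an illustrative divergence threshold tol. It only shows how the gap between the intrinsic dimension estimates of the sampled and mean representations could drive a search over the latent size.

```python
# Hedged sketch of the idea described in the abstract, not the authors' method.
import numpy as np
from scipy.spatial import cKDTree


def twonn_id(x: np.ndarray) -> float:
    """Maximum-likelihood TwoNN estimate of the intrinsic dimension of the rows of x."""
    dist, _ = cKDTree(x).query(x, k=3)   # distances to self, 1st and 2nd nearest neighbours
    mu = dist[:, 2] / dist[:, 1]         # ratio of 2nd to 1st neighbour distance
    mu = mu[mu > 1.0]                    # drop degenerate pairs with identical distances
    return len(mu) / np.sum(np.log(mu))


def ide_gap(mean_repr: np.ndarray, sampled_repr: np.ndarray) -> float:
    """ID of the sampled representation minus ID of the mean representation.

    Passive latent variables contribute little to the mean representation but add
    prior noise to the sampled one, so a growing gap signals superfluous dimensions.
    """
    return twonn_id(sampled_repr) - twonn_id(mean_repr)


def find_latent_dim(get_representations, max_dim: int = 256, tol: float = 1.0) -> int:
    """Largest latent size whose ID gap stays below `tol` (illustrative criterion).

    `get_representations(d)` is a hypothetical hook: train a VAE with `d` latent
    dimensions for a few steps and return (mean, sampled) encodings of a held-out
    batch as two arrays of shape (n_samples, d).
    """
    lo, hi = 1, 2
    while hi <= max_dim and ide_gap(*get_representations(hi)) < tol:
        lo, hi = hi, hi * 2              # double until the representations diverge
    hi = min(hi, max_dim + 1)
    while hi - lo > 1:                   # bisect between last "good" and first "bad" size
        mid = (lo + hi) // 2
        if ide_gap(*get_representations(mid)) < tol:
            lo = mid
        else:
            hi = mid
    return lo


if __name__ == "__main__":
    # Sanity check of the estimator alone: 5-dimensional Gaussian data
    # linearly embedded in 20 ambient dimensions.
    rng = np.random.default_rng(0)
    z = rng.normal(size=(2000, 5)) @ rng.normal(size=(5, 20))
    print(f"TwoNN estimate: {twonn_id(z):.2f} (true intrinsic dimension: 5)")
```

Because the search doubles and then bisects the candidate latent size, the number of short training runs grows only logarithmically with the final dimensionality, which is what makes this kind of approach cheaper than an exhaustive grid search.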

Related research

06/08/2020 · tvGP-VAE: Tensor-variate Gaussian Process Prior Variational Autoencoder
Variational autoencoders (VAEs) are a powerful class of deep generative ...

07/19/2018 · Bounded Information Rate Variational Autoencoders
This paper introduces a new member of the family of Variational Autoenco...

03/24/2020 · Dynamic Narrowing of VAE Bottlenecks Using GECO and L_0 Regularization
When designing variational autoencoders (VAEs) or other types of latent ...

09/26/2021 · Be More Active! Understanding the Differences between Mean and Sampled Representations of Variational Autoencoders
The ability of Variational Autoencoders to learn disentangled representa...

08/08/2022 · Sparse Representation Learning with Modified q-VAE towards Minimal Realization of World Model
Extraction of low-dimensional latent space from high-dimensional observa...

02/09/2022 · Covariate-informed Representation Learning with Samplewise Optimal Identifiable Variational Autoencoders
Recently proposed identifiable variational autoencoder (iVAE, Khemakhem ...

05/12/2020 · Jigsaw-VAE: Towards Balancing Features in Variational Autoencoders
The latent variables learned by VAEs have seen considerable interest as ...
