Variational autoencoder with decremental information bottleneck for disentanglement

03/22/2023
by Jiantao Wu, et al.

One major challenge of disentanglement learning with variational autoencoders is the trade-off between disentanglement and reconstruction fidelity. Previous incremental methods with only one latent space cannot optimize these two targets simultaneously, so they expand the information bottleneck during training to shift the optimization from disentanglement toward reconstruction. However, a large bottleneck loses the constraint on disentanglement, causing the information diffusion problem. To tackle this issue, we present DeVAE, a novel decremental variational autoencoder with disentanglement-invariant transformations that optimizes multiple objectives in different layers, balancing disentanglement and reconstruction fidelity by gradually decreasing the information bottlenecks of diverse latent spaces. Benefiting from these multiple latent spaces, DeVAE optimizes reconstruction while keeping the disentanglement constraint, avoiding information diffusion. DeVAE is also compatible with large models with high-dimensional latent spaces. Experimental results on dSprites and Shapes3D show that DeVAE achieves a good balance between disentanglement and reconstruction, and that it is highly tolerant of hyperparameter choices and high-dimensional latent spaces.
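To make the idea of several bottlenecked latent spaces concrete, here is a minimal PyTorch sketch based only on the abstract: a VAE-style model with parallel latent spaces, each under its own KL (bottleneck) weight that is gradually relaxed during training. The class and function names (MultiLatentVAE, loss), the layer sizes, the beta values, and the linear annealing schedule are illustrative assumptions, not the authors' implementation or the exact DeVAE objective.

```python
# Minimal sketch (not the authors' code): a VAE-style objective with several
# parallel latent spaces, each under its own information-bottleneck weight.
# Layer sizes, betas, and the schedule below are illustrative assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class MultiLatentVAE(nn.Module):
    def __init__(self, x_dim=4096, z_dims=(32, 16, 8)):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(x_dim, 256), nn.ReLU())
        # One Gaussian posterior head per latent space.
        self.heads = nn.ModuleList(nn.Linear(256, 2 * d) for d in z_dims)
        self.decoder = nn.Sequential(nn.Linear(sum(z_dims), 256), nn.ReLU(),
                                     nn.Linear(256, x_dim))

    def forward(self, x):
        h = self.encoder(x)
        zs, kls = [], []
        for head in self.heads:
            mu, logvar = head(h).chunk(2, dim=-1)
            # Reparameterization trick and per-space KL to a standard normal.
            z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()
            kl = -0.5 * (1 + logvar - mu.pow(2) - logvar.exp()).sum(-1).mean()
            zs.append(z)
            kls.append(kl)
        x_hat = self.decoder(torch.cat(zs, dim=-1))
        return x_hat, kls

def loss(model, x, step, total_steps, betas=(8.0, 4.0, 1.0)):
    """Reconstruction plus per-latent-space KL terms.

    Each latent space keeps its own bottleneck weight; the weights are
    annealed over training as a stand-in for gradually changing the
    bottleneck of each latent space.
    """
    x_hat, kls = model(x)
    recon = F.mse_loss(x_hat, x, reduction="mean")
    anneal = 1.0 - step / total_steps  # illustrative linear schedule
    kl_term = sum(b * anneal * kl for b, kl in zip(betas, kls))
    return recon + kl_term
```

One design point this sketch tries to reflect: because each latent space carries its own KL weight, a tightly bottlenecked space can keep enforcing disentanglement while a looser one recovers reconstruction detail, rather than a single bottleneck having to serve both roles.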


