Quantifying the Effects of Enforcing Disentanglement on Variational Autoencoders

11/24/2017
by Momchil Peychev, et al.

The notion of disentangled autoencoders was proposed as an extension to the variational autoencoder by introducing a disentanglement parameter β, which controls the learning pressure put on the possible underlying latent representations. For certain values of β this kind of autoencoder is capable of encoding independent input generative factors in separate elements of the code, leading to more interpretable and predictable model behaviour. In this paper we quantify the effects of the parameter β on model performance and disentanglement. After training multiple models with the same value of β, we establish the existence of consistent variance in one of the disentanglement measures proposed in the literature. We also demonstrate the negative effect of disentanglement on the autoencoder's discriminative ability, varying the number of examples available during training.
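For concreteness, the objective the abstract refers to can be sketched as follows: the β-VAE loss is the usual reconstruction term plus the KL divergence to a standard normal prior, scaled by β. This is a minimal numpy sketch, assuming a Gaussian encoder (diagonal covariance) and a squared-error reconstruction term; the function name and argument layout are illustrative, not from the paper's code.

```python
import numpy as np

def beta_vae_loss(x, x_recon, mu, log_var, beta):
    """Sketch of the beta-VAE objective (assumed formulation).

    x, x_recon : (batch, features) inputs and reconstructions
    mu, log_var: (batch, latents) parameters of the Gaussian encoder
    beta       : disentanglement pressure; beta = 1 recovers the plain VAE
    """
    # Reconstruction term: per-example squared error (Gaussian decoder assumption)
    recon = np.sum((x - x_recon) ** 2, axis=1)
    # Closed-form KL divergence between N(mu, sigma^2) and the standard
    # normal prior, summed over latent dimensions
    kl = -0.5 * np.sum(1.0 + log_var - mu ** 2 - np.exp(log_var), axis=1)
    # beta > 1 increases the pressure toward a factorised (disentangled) code,
    # at the cost of reconstruction quality
    return np.mean(recon + beta * kl)
```

With β = 1 this is the standard evidence lower bound (up to sign and constants); increasing β weights the KL term more heavily, which is the "learning pressure" knob studied in the paper.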


Related research

09/15/2021 - Disentangling Generative Factors in Natural Language with Discrete Variational Autoencoders
  The ability of learning disentangled representations represents a major ...

05/19/2021 - Disentanglement Learning for Variational Autoencoders Applied to Audio-Visual Speech Enhancement
  Recently, the standard variational autoencoder has been successfully use...

05/17/2022 - How do Variational Autoencoders Learn? Insights from Representational Similarity
  The ability of Variational Autoencoders (VAEs) to learn disentangled rep...

03/19/2020 - Disentanglement with Hyperspherical Latent Spaces using Diffusion Variational Autoencoders
  A disentangled representation of a data set should be capable of recover...

05/14/2021 - DoS and DDoS Mitigation Using Variational Autoencoders
  DoS and DDoS attacks have been growing in size and number over the last ...

10/07/2019 - Negative Sampling in Variational Autoencoders
  We propose negative sampling as an approach to improve the notoriously b...

04/09/2018 - Scalable Factorized Hierarchical Variational Autoencoder Training
  Deep generative models have achieved great success in unsupervised learn...
