Generated Loss and Augmented Training of MNIST VAE

04/24/2019
by Jason Chou, et al.

The variational autoencoder (VAE) framework is a popular option for training unsupervised generative models, featuring ease of training and a latent representation of the data. The VAE objective, however, does not guarantee the latter, and failure to achieve it leads to a frequent failure mode called posterior collapse. Even in successful cases, VAEs often produce low-precision reconstructions and generated samples. Introducing a weight β on the KL-divergence term can help steer the model clear of posterior collapse, but tuning β is often a trial-and-error process with no guiding metrics. Here we test the idea of using the total VAE loss of generated samples (generated loss) as a proxy metric for generation quality, the related hypothesis that a VAE reconstruction from the mean latent vector tends to be a more typical example of its class than the original, and the idea of exploiting this property by augmenting the training data with generated variants (augmented training). The results are mixed, but repeated encoding and decoding does indeed yield qualitatively and quantitatively more typical examples from both convolutional and fully-connected MNIST VAEs, suggesting that this may be an inherent property of the VAE framework.
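
To make the three ideas in the abstract concrete, here is a minimal PyTorch sketch, not the paper's code: the encoder returning (mu, logvar) and the decoder returning Bernoulli pixel logits are hypothetical stand-ins for whatever architecture the paper actually uses, and names such as vae_loss, generated_loss, mean_reconstruction, and augmented_batch are illustrative only.

import torch
import torch.nn.functional as F

def vae_loss(x, encoder, decoder, beta=1.0):
    # Total beta-VAE loss: reconstruction + beta * KL(q(z|x) || N(0, I)).
    mu, logvar = encoder(x)                                  # assumed encoder API
    z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()     # reparameterization trick
    logits = decoder(z)                                      # Bernoulli logits, same shape as x
    recon = F.binary_cross_entropy_with_logits(logits, x, reduction="sum")
    kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
    return recon + beta * kl

def generated_loss(encoder, decoder, n=64, latent_dim=20, beta=1.0):
    # Proxy metric from the abstract: draw samples from the prior, decode
    # them, and evaluate the total VAE loss on the generated images.
    with torch.no_grad():
        z = torch.randn(n, latent_dim)
        x_gen = torch.sigmoid(decoder(z))
        return vae_loss(x_gen, encoder, decoder, beta) / n

def mean_reconstruction(x, encoder, decoder, steps=1):
    # Repeated encoding and decoding through the *mean* latent vector; per
    # the abstract, this drifts toward a more typical example of the class.
    for _ in range(steps):
        mu, _ = encoder(x)
        x = torch.sigmoid(decoder(mu))
    return x

def augmented_batch(x, encoder, decoder):
    # Augmented training: extend the batch with its generated variants.
    with torch.no_grad():
        x_aug = mean_reconstruction(x, encoder, decoder, steps=1)
    return torch.cat([x, x_aug], dim=0)

In this reading, generated_loss could be tracked across values of β as the guiding metric the abstract says is otherwise missing, while augmented_batch feeds vae_loss during training; both uses are inferred from the abstract rather than confirmed against the paper.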
