Improve variational autoEncoder with auxiliary softmax multiclassifier

08/17/2019
by Yao Li, et al.

As a general-purpose generative model architecture, the variational autoencoder (VAE) has been widely used in image and natural language processing. A VAE maps high-dimensional sample data into continuous latent variables through unsupervised learning; by sampling in the latent space, it can construct new image or text data. As a general-purpose generative model, however, the vanilla VAE cannot fit various data sets and network architectures well. Because training must balance reconstruction accuracy against the convenience of sampling latent variables, VAE often suffers from the problem known as "posterior collapse", and images reconstructed by VAE are often blurred. In this paper, we analyze the main cause of these problems: the lack of mutual information between the sample variable and the latent feature variable during training. To maintain mutual information during training, we propose an auxiliary softmax multi-classification network structure to improve the training of VAE, named VAE-AS. We test VAE-AS on the MNIST and Omniglot data sets. The results show that VAE-AS is effective at adjusting mutual information and alleviating the posterior collapse problem.
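
The abstract does not spell out the loss, but the connection it draws can be made concrete. Averaging the ELBO's KL term over the data gives the decomposition E_p(x)[KL(q(z|x) || p(z))] = I_q(x; z) + KL(q(z) || p(z)) (Hoffman and Johnson, 2016), so an objective that drives the KL term toward zero also drives the mutual information I_q(x; z) toward zero, which is exactly posterior collapse. Below is a minimal PyTorch sketch of the general idea of attaching an auxiliary softmax classifier to the latent code so that z must stay informative about x; the layer sizes, the head aux_head, and the weight alpha are illustrative assumptions, not the authors' exact VAE-AS architecture.

import torch
import torch.nn as nn
import torch.nn.functional as F

class VAEWithAuxSoftmax(nn.Module):
    """VAE plus an auxiliary softmax classifier on the latent code.

    The classifier head keeps z predictive of the sample's class,
    which prevents the mutual information between x and z from
    collapsing during training. Layer sizes and the head itself
    are illustrative assumptions, not the paper's architecture.
    """
    def __init__(self, x_dim=784, z_dim=32, n_classes=10):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(x_dim, 400), nn.ReLU())
        self.mu = nn.Linear(400, z_dim)
        self.logvar = nn.Linear(400, z_dim)
        self.dec = nn.Sequential(nn.Linear(z_dim, 400), nn.ReLU(),
                                 nn.Linear(400, x_dim))
        self.aux_head = nn.Linear(z_dim, n_classes)  # auxiliary softmax classifier

    def forward(self, x):
        h = self.enc(x)
        mu, logvar = self.mu(h), self.logvar(h)
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)  # reparameterization trick
        return self.dec(z), self.aux_head(z), mu, logvar

def loss_fn(x, y, x_logits, class_logits, mu, logvar, alpha=1.0):
    # Standard ELBO terms: reconstruction + KL to the unit-Gaussian prior.
    recon = F.binary_cross_entropy_with_logits(x_logits, x, reduction='sum')
    kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
    # Auxiliary classification term tying the latent code to the labels.
    aux = F.cross_entropy(class_logits, y, reduction='sum')
    return recon + kl + alpha * aux

# Usage sketch on random MNIST-shaped data:
model = VAEWithAuxSoftmax()
x = torch.rand(64, 784)
y = torch.randint(0, 10, (64,))
x_logits, class_logits, mu, logvar = model(x)
loss = loss_fn(x, y, x_logits, class_logits, mu, logvar, alpha=1.0)
loss.backward()

In this sketch the weight alpha trades reconstruction quality against how strongly the latent code is tied to the labels; setting alpha = 0 recovers the vanilla VAE objective.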


