Degeneration in VAE: in the Light of Fisher Information Loss

02/19/2018
by Huangjie Zheng, et al.

The Variational Autoencoder (VAE) is one of the most popular generative models, and many advances have been made on it in recent years. As raw data and model architectures grow more complex, deep networks are needed in VAE models, yet few works discuss their impact. According to our observations, a VAE does not always benefit from a deeper architecture: 1) a deeper encoder makes the VAE learn more comprehensible latent representations, but produces blurry reconstructions; 2) a deeper decoder yields higher-quality generations, but the latent representations become abstruse; 3) when both the encoder and the decoder go deeper, abstruse latent representations and blurry reconstructions occur at the same time. In this paper, we derive a Fisher information measure for the corresponding analysis. With this measure, we demonstrate that information loss is inevitable in feed-forward networks and causes the three types of degeneration above, especially as the network goes deeper. We also show that skip connections help preserve the amount of information, and accordingly propose a skip-connection-enhanced VAE, named SCVAE. In experiments, SCVAE is shown to mitigate the information loss and to achieve promising performance in both encoding and decoding tasks. Moreover, SCVAE can be combined with other state-of-the-art VAE variants for further improvement.
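The abstract does not reproduce the paper's Fisher information measure, but the standard ingredients of such an argument are the Fisher information of a representation and its data-processing inequality: for any map $f$ applied after $X$ (for example, a feed-forward layer),

$$\mathcal{I}_X(\theta) = \mathbb{E}_{x \sim p(x \mid \theta)}\!\left[\left(\frac{\partial}{\partial \theta} \log p(x \mid \theta)\right)^{2}\right], \qquad \mathcal{I}_{f(X)}(\theta) \le \mathcal{I}_X(\theta),$$

so each layer can only preserve or lose information about $\theta$, never create it. Likewise, the exact SCVAE architecture is not given in this abstract; the following is a minimal sketch, assuming PyTorch and an MLP backbone, of how skip connections of the form $y = x + f(x)$ can be added to a deep VAE so the identity path carries the input past each transformation. All names and sizes here (SkipBlock, SkipVAE, h_dim, depth) are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn

class SkipBlock(nn.Module):
    """Residual block y = x + f(x): the identity path lets information
    bypass the learned transformation instead of being filtered by it."""
    def __init__(self, dim):
        super().__init__()
        self.f = nn.Sequential(nn.Linear(dim, dim), nn.ReLU(), nn.Linear(dim, dim))

    def forward(self, x):
        return x + self.f(x)

class SkipVAE(nn.Module):
    """Illustrative VAE whose deep encoder and decoder stacks use skip
    connections (hypothetical layout; SCVAE's details are in the full text)."""
    def __init__(self, x_dim=784, h_dim=256, z_dim=32, depth=4):
        super().__init__()
        self.enc_in = nn.Linear(x_dim, h_dim)
        self.enc = nn.Sequential(*[SkipBlock(h_dim) for _ in range(depth)])
        self.mu = nn.Linear(h_dim, z_dim)
        self.logvar = nn.Linear(h_dim, z_dim)
        self.dec_in = nn.Linear(z_dim, h_dim)
        self.dec = nn.Sequential(*[SkipBlock(h_dim) for _ in range(depth)])
        self.dec_out = nn.Linear(h_dim, x_dim)

    def forward(self, x):
        h = self.enc(torch.relu(self.enc_in(x)))
        mu, logvar = self.mu(h), self.logvar(h)
        # Reparameterization trick: z = mu + sigma * eps, eps ~ N(0, I)
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)
        return self.dec_out(self.dec(torch.relu(self.dec_in(z)))), mu, logvar
```

Without the identity term, each added block is one more lossy transformation in the chain; with it, depth adds capacity while an unmodified copy of each block's input is carried forward, which is consistent with the abstract's claim that skip connections help preserve the amount of information.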

Related Research

06/02/2019 · Generating Diverse High-Fidelity Images with VQ-VAE-2
We explore the use of Vector Quantized Variational AutoEncoder (VQ-VAE) ...

12/17/2018 · Variational Autoencoders Pursue PCA Directions (by Accident)
The Variational Autoencoder (VAE) is a powerful architecture capable of ...

02/04/2022 · Robust Vector Quantized-Variational Autoencoder
Image generative models can learn the distributions of the training data...

02/08/2017 · A Hybrid Convolutional Variational Autoencoder for Text Generation
In this paper we explore the effect of architectural choices on learning...

04/24/2019 · Generated Loss and Augmented Training of MNIST VAE
The variational autoencoder (VAE) framework is a popular option for trai...

05/19/2017 · Multi-Stage Variational Auto-Encoders for Coarse-to-Fine Image Generation
Variational auto-encoder (VAE) is a powerful unsupervised learning frame...

02/27/2017 · Improved Variational Autoencoders for Text Modeling using Dilated Convolutions
Recent work on generative modeling of text has found that variational au...
