InfoVAE: Information Maximizing Variational Autoencoders

06/07/2017
by Shengjia Zhao, et al.

It has been previously observed that variational autoencoders tend to ignore the latent code when combined with a decoding distribution that is too flexible. This undermines the purpose of unsupervised representation learning. In this paper, we additionally show that existing training criteria can lead to extremely poor amortized inference distributions and overestimation of the posterior variance, even when trained to optimality. We trace both shortcomings to the regularization term in the ELBO criterion that matches the variational posterior to the latent prior distribution. We propose a class of training criteria, termed InfoVAE, that solve these two problems. We show that these models maximize the mutual information between inputs and latent features, make effective use of the latent features regardless of the flexibility of the decoding distribution, and avoid the variance over-estimation problem. Through extensive qualitative and quantitative analyses, we demonstrate that our models do not suffer from these problems and outperform models trained with ELBO on multiple performance metrics.
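To make the proposed change concrete, the following is a minimal sketch (not the authors' reference implementation) of one instantiation of this idea: replacing the ELBO's per-sample KL regularizer with a divergence between samples from the aggregate posterior q(z) and the prior p(z), here an MMD estimate with a Gaussian kernel. The names `encoder`, `decoder`, and the weight `lambda_` are illustrative assumptions, not taken from the paper.

```python
# Illustrative InfoVAE-style loss: reconstruction term plus an MMD penalty
# pushing the aggregate posterior toward the prior N(0, I). This is a sketch
# under assumed interfaces (encoder returns (mu, logvar), decoder returns x_hat).
import torch
import torch.nn.functional as F

def gaussian_kernel(x, y):
    # Pairwise RBF kernel k(x, y) = exp(-||x - y||^2 / dim), shape (n, m).
    dim = x.size(1)
    diff = x.unsqueeze(1) - y.unsqueeze(0)
    return torch.exp(-diff.pow(2).sum(-1) / dim)

def mmd(z_q, z_p):
    # Biased MMD^2 estimate between posterior samples z_q and prior samples z_p.
    return (gaussian_kernel(z_q, z_q).mean()
            + gaussian_kernel(z_p, z_p).mean()
            - 2.0 * gaussian_kernel(z_q, z_p).mean())

def infovae_loss(x, encoder, decoder, lambda_=10.0):
    # Reparameterized sample from q(z|x).
    mu, logvar = encoder(x)
    z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()
    x_recon = decoder(z)
    recon = F.mse_loss(x_recon, x, reduction="mean")
    # Match samples of q(z) to the prior instead of the per-sample KL term.
    z_prior = torch.randn_like(z)
    return recon + lambda_ * mmd(z, z_prior)
```

Because the regularizer acts on the aggregate posterior rather than on each q(z|x) individually, the latent code can remain informative about x even with a flexible decoder; the weight `lambda_` trades reconstruction quality against how closely q(z) matches the prior.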
