Variational Mutual Information Maximization Framework for VAE Latent Codes with Continuous and Discrete Priors

06/02/2020
by Andriy Serdega et al.

Learning interpretable and disentangled representations of data is a key topic in machine learning research. The Variational Autoencoder (VAE) is a scalable method for learning directed latent-variable models of complex data. It employs a clear, interpretable objective that is easy to optimize. However, this objective provides no explicit measure of the quality of the latent-variable representations, which can leave them poor. We propose a Variational Mutual Information Maximization Framework for VAE to address this issue. Unlike other methods, it provides an explicit objective that maximizes a lower bound on the mutual information between latent codes and observations. The objective acts as a regularizer that prevents the VAE from ignoring the latent variable and lets one select particular components of it to be most informative with respect to the observations. In addition, the proposed framework provides a way to evaluate the mutual information between latent codes and observations for a fixed VAE model. We conducted experiments on VAE models with Gaussian and joint Gaussian and discrete latent variables. Our results show that the proposed approach strengthens the relationship between latent codes and observations and improves the learned representations.
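To make the central quantity concrete, here is a minimal numpy sketch of a variational (Barber–Agakov) lower bound on the mutual information I(z; x), of the kind the framework maximizes: I(z; x) >= H(z) + E[log q(z|x)], where q is an auxiliary model that predicts the latent code from the observation. The linear "decoder" and linear auxiliary network below are toy stand-ins chosen for illustration, not the paper's architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy setup: a linear "decoder" maps latent z to observation x,
# and a linear auxiliary network (here just the pseudo-inverse) recovers z
# from x. Both are illustrative assumptions, not the paper's model.
latent_dim, obs_dim, n = 4, 16, 512
W_dec = rng.normal(size=(latent_dim, obs_dim))
W_aux = np.linalg.pinv(W_dec)          # auxiliary "recognition" weights
log_sigma_aux = np.zeros(latent_dim)   # unit variance for Gaussian q(z|x)

def mi_lower_bound(z, x):
    """Barber-Agakov bound: I(z;x) >= H(z) + E[log q(z|x)].

    q(z|x) is Gaussian with mean x @ W_aux; for a standard normal
    prior, H(z) = 0.5 * d * log(2*pi*e).
    """
    mu = x @ W_aux                      # q's mean prediction of z from x
    var = np.exp(2 * log_sigma_aux)
    log_q = -0.5 * np.sum(
        np.log(2 * np.pi * var) + (z - mu) ** 2 / var, axis=1
    )
    h_z = 0.5 * latent_dim * np.log(2 * np.pi * np.e)
    return h_z + log_q.mean()

z = rng.normal(size=(n, latent_dim))    # codes drawn from the N(0, I) prior
x = z @ W_dec                           # "decoded" observations (noise-free)

bound = mi_lower_bound(z, x)
# Breaking the z-x pairing (as an uninformative latent code would) lowers
# the bound, which is what lets it act as a regularizer during training.
shuffled = mi_lower_bound(z, x[rng.permutation(n)])
```

In a real VAE this bound would be estimated on minibatches and added, suitably weighted, to the evidence lower bound, so the encoder-decoder pair is penalized whenever the latent code carries little information about the observations.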


