
Mutual Information Constraints for Monte-Carlo Objectives

by Gábor Melis et al.

A common failure mode of density models trained as variational autoencoders is to model the data without relying on their latent variables, rendering these variables useless. Two contributing factors, the underspecification of the model and the looseness of the variational lower bound, have been studied separately in the literature. We weave these two strands of research together, specifically the tighter bounds of Monte-Carlo objectives and constraints on the mutual information between the observable and the latent variables. Estimating the mutual information as the average Kullback-Leibler divergence between the easily available variational posterior q(z|x) and the prior does not work with Monte-Carlo objectives, because q(z|x) is no longer a direct approximation to the model's true posterior p(z|x). Hence, we construct estimators of the Kullback-Leibler divergence of the true posterior from the prior by recycling samples used in the objective, with which we train models of continuous and discrete latents at much improved rate-distortion and with no posterior collapse. Though alleviated, the tradeoff between modelling the data and using the latents remains, and we urge evaluating inference methods across a range of mutual information values.
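The "easily available" estimator the abstract refers to is the average, over the data, of KL(q(z|x) || p(z)); in the standard VAE setup this quantity upper-bounds the mutual information between x and z (it equals the MI plus the KL divergence of the aggregate posterior from the prior). A minimal sketch of that naive estimate, assuming a diagonal-Gaussian variational posterior and a standard normal prior (the function name and example encoder outputs below are hypothetical, not from the paper):

```python
import numpy as np

def gaussian_kl_to_standard_normal(mu, log_var):
    """Analytic KL( N(mu, diag(exp(log_var))) || N(0, I) ), per example.

    mu, log_var: arrays of shape (batch, latent_dim).
    Returns an array of shape (batch,).
    """
    return 0.5 * np.sum(np.exp(log_var) + mu**2 - 1.0 - log_var, axis=-1)

# Hypothetical encoder outputs for a batch of 4 examples, 2 latent dims.
mu = np.array([[0.0, 0.0], [1.0, -1.0], [0.5, 0.5], [-2.0, 0.0]])
log_var = np.zeros_like(mu)  # unit variances

# The naive MI estimate: average KL(q(z|x) || p(z)) over the data.
mi_upper_bound = gaussian_kl_to_standard_normal(mu, log_var).mean()
```

As the abstract notes, this only tracks the true mutual information when q(z|x) approximates the model posterior p(z|x), which no longer holds under multi-sample Monte-Carlo objectives; the paper's contribution is estimators that instead target KL(p(z|x) || p(z)) by reusing the samples already drawn for the objective.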




Improved Variational Neural Machine Translation by Promoting Mutual Information

Posterior collapse plagues VAEs for text, especially for conditional tex...

Forget-me-not! Contrastive Critics for Mitigating Posterior Collapse

Variational autoencoders (VAEs) suffer from posterior collapse, where th...

Estimators of Entropy and Information via Inference in Probabilistic Models

Estimating information-theoretic quantities such as entropy and mutual i...

Action and Perception as Divergence Minimization

We introduce a unified objective for action and perception of intelligen...

Notes on Icebreaker

Icebreaker [1] is new research from MSR that is able to achieve state of...

Tensor Monte Carlo: particle methods for the GPU era

Multi-sample objectives improve over single-sample estimates by giving t...

InfoVAE: Information Maximizing Variational Autoencoders

It has been previously observed that variational autoencoders tend to ig...