Constraining Variational Inference with Geometric Jensen-Shannon Divergence

by Jacob Deasy, et al.

We examine the problem of controlling divergences for latent space regularisation in variational autoencoders, specifically the case of reconstructing an example x ∈ R^m via a latent space z ∈ R^n (n ≤ m) while balancing this against the need for generalisable latent representations. We present a regularisation mechanism based on the skew geometric Jensen-Shannon divergence (JS^G_α). We find a variation of JS^G_α, motivated by its limiting cases, which leads to an intuitive interpolation between forward and reverse KL in the space of both distributions and divergences. We motivate its potential benefits for VAEs through low-dimensional examples, before presenting quantitative and qualitative results. Our experiments demonstrate that skewing our variant of JS^G_α, in the context of JS^G_α-VAEs, leads to better reconstruction and generation than several baseline VAEs. Our approach is entirely unsupervised and uses only a single hyperparameter, which is easily interpreted in latent space.
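The interpolation property can be sketched concretely for univariate Gaussians, where both the KL divergence and the normalized geometric mean of two densities have closed forms. The sketch below assumes the "dual" geometric-mean weighting (skew weight α on the first argument), under which the divergence recovers KL(p‖q) at α = 0 and KL(q‖p) at α = 1; function names are illustrative and this is not the authors' reference implementation.

```python
import math

def kl_gauss(m0, s0, m1, s1):
    """Closed-form KL(N(m0, s0^2) || N(m1, s1^2))."""
    return math.log(s1 / s0) + (s0**2 + (m0 - m1)**2) / (2 * s1**2) - 0.5

def geo_mean_gauss(m0, s0, m1, s1, a):
    """Normalized geometric mean p^a * q^(1-a) of two Gaussians.

    The result is itself Gaussian, with precision-weighted parameters.
    """
    prec = a / s0**2 + (1 - a) / s1**2
    var = 1.0 / prec
    mean = var * (a * m0 / s0**2 + (1 - a) * m1 / s1**2)
    return mean, math.sqrt(var)

def js_geo(m0, s0, m1, s1, a):
    """Skew geometric JS divergence with dual weighting.

    Interpolates from forward KL(p||q) at a = 0 to reverse KL(q||p) at a = 1.
    """
    mg, sg = geo_mean_gauss(m0, s0, m1, s1, a)
    return (1 - a) * kl_gauss(m0, s0, mg, sg) + a * kl_gauss(m1, s1, mg, sg)
```

At the endpoints the geometric mean collapses onto one of the two distributions, so one KL term vanishes and the other reduces to a plain forward or reverse KL; intermediate α values trade smoothly between the two, which is what makes the single skew hyperparameter interpretable in latent space.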




