Constraining Variational Inference with Geometric Jensen-Shannon Divergence

06/18/2020
by Jacob Deasy, et al.

We examine the problem of controlling divergences for latent space regularisation in variational autoencoders. Specifically, we consider the setting in which an example x ∈ R^m is reconstructed via a latent representation z ∈ R^n (n ≤ m), while reconstruction quality is balanced against the need for generalisable latent representations. We present a regularisation mechanism based on the skew geometric-Jensen-Shannon divergence (JS^G_α). We introduce a variant of JS^G_α, motivated by its limiting cases, which leads to an intuitive interpolation between forward and reverse KL divergences in the space of both distributions and divergences. We motivate its potential benefits for VAEs through low-dimensional examples, before presenting quantitative and qualitative results. Our experiments demonstrate that skewing our variant of JS^G_α, in the context of JS^G_α-VAEs, leads to better reconstruction and generation than several baseline VAEs. Our approach is entirely unsupervised and uses only a single hyperparameter, which can be easily interpreted in latent space.
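
Because the weighted geometric mean of two Gaussians is again Gaussian, JS^G_α admits a closed form between Gaussian distributions and can act as a drop-in replacement for the KL term in a VAE objective. The sketch below is a minimal illustration under stated assumptions, not the authors' reference implementation: it assumes diagonal Gaussian encoder and prior distributions, uses the Nielsen-style definition JS^G_α(p‖q) = (1−α) KL(p‖G_α) + α KL(q‖G_α) with G_α the normalised weighted geometric mean of p and q, and the `dual` flag (which reverses the skew of the geometric mean) is only one reading of the variant described in the abstract. Function names and the hyperparameter `alpha` are illustrative.

```python
import torch

def kl_diag_gaussians(mu_p, var_p, mu_q, var_q):
    # KL( N(mu_p, var_p) || N(mu_q, var_q) ) for diagonal Gaussians,
    # summed over the last (latent) dimension.
    return 0.5 * torch.sum(
        torch.log(var_q / var_p) + (var_p + (mu_p - mu_q) ** 2) / var_q - 1.0,
        dim=-1,
    )

def geometric_mean_gaussian(mu_0, var_0, mu_1, var_1, alpha):
    # Normalised weighted geometric mean p^(1-alpha) q^alpha of two diagonal
    # Gaussians; precisions combine linearly, so the result is again Gaussian.
    var_a = 1.0 / ((1.0 - alpha) / var_0 + alpha / var_1)
    mu_a = var_a * ((1.0 - alpha) * mu_0 / var_0 + alpha * mu_1 / var_1)
    return mu_a, var_a

def skew_geometric_js(mu_0, var_0, mu_1, var_1, alpha, dual=False):
    # Skew geometric Jensen-Shannon divergence between two diagonal Gaussians:
    #   (1 - alpha) * KL(p || G) + alpha * KL(q || G),
    # where G is the weighted geometric mean of p and q. With dual=True the
    # skew of the geometric mean is reversed, which is one reading of the
    # variant whose limits recover forward and reverse KL.
    a_mean = (1.0 - alpha) if dual else alpha
    mu_a, var_a = geometric_mean_gaussian(mu_0, var_0, mu_1, var_1, a_mean)
    return ((1.0 - alpha) * kl_diag_gaussians(mu_0, var_0, mu_a, var_a)
            + alpha * kl_diag_gaussians(mu_1, var_1, mu_a, var_a))

# Illustrative use as the regulariser in a VAE loss, against a standard-normal
# prior (dummy encoder outputs):
mu, log_var = torch.zeros(8, 2), torch.zeros(8, 2)
prior_mu, prior_var = torch.zeros_like(mu), torch.ones_like(mu)
reg = skew_geometric_js(mu, log_var.exp(), prior_mu, prior_var, alpha=0.5, dual=True)
```

With `dual=True`, the geometric mean collapses to the prior as alpha → 0 (recovering forward KL) and to the approximate posterior as alpha → 1 (recovering reverse KL), which is the interpolation behaviour described in the abstract.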


