Variational Information Bottleneck on Vector Quantized Autoencoders

08/02/2018
by   Hanwei Wu, et al.

In this paper, we provide an information-theoretic interpretation of the Vector Quantized-Variational Autoencoder (VQ-VAE). We show that the loss function of the original VQ-VAE can be derived from the variational deterministic information bottleneck (VDIB) principle. On the other hand, the VQ-VAE trained with the Expectation Maximization (EM) algorithm can be viewed as an approximation to the variational information bottleneck (VIB) principle.
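For context, a minimal sketch of the objectives involved, written in the standard notation of the VQ-VAE and (V)IB literature rather than in the paper's own notation: z_e(x) is the encoder output, e the nearest codeword, sg[.] the stop-gradient operator, q(x|z) a variational decoder, and r(z) a variational prior over the latent; the trade-off weight beta and the exact correspondence between the terms are part of the paper's derivation and are only indicated here.

% VQ-VAE loss (van den Oord et al., 2017), to be minimized:
% reconstruction term plus codebook and commitment terms via stop-gradients.
\mathcal{L}_{\mathrm{VQ\text{-}VAE}}
  = -\log p\big(x \mid z_q(x)\big)
  + \big\|\,\mathrm{sg}[z_e(x)] - e\,\big\|_2^2
  + \beta\,\big\|\,z_e(x) - \mathrm{sg}[e]\,\big\|_2^2

% Variational information bottleneck (Alemi et al., 2017), to be maximized:
% a variational lower bound on I(Z;Y) - beta I(Z;X); for an autoencoder, Y = X.
\mathcal{L}_{\mathrm{VIB}}
  = \mathbb{E}_{p(x)}\,\mathbb{E}_{p(z\mid x)}\big[\log q(x \mid z)\big]
  - \beta\,\mathbb{E}_{p(x)}\big[\mathrm{KL}\big(p(z \mid x)\,\big\|\, r(z)\big)\big]

% Deterministic variant (Strouse & Schwab, 2017), to be maximized: the
% compression term I(Z;X) is replaced by the entropy H(Z), which for a
% deterministic encoder z(x) is upper-bounded by the cross entropy
% -E[log r(z(x))]; the paper relates this VDIB bound to the VQ-VAE loss above.
\mathcal{L}_{\mathrm{VDIB}}
  = \mathbb{E}_{p(x)}\big[\log q\big(x \mid z(x)\big)\big]
  + \beta\,\mathbb{E}_{p(x)}\big[\log r\big(z(x)\big)\big]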


