
Relaxed-Responsibility Hierarchical Discrete VAEs

by Matthew Willetts, et al.

Successfully training Variational Autoencoders (VAEs) with a hierarchy of discrete latent variables remains an area of active research. Leveraging insights from classical methods of inference, we introduce Relaxed-Responsibility Vector-Quantisation, a novel way to parameterise discrete latent variables and a refinement of relaxed Vector-Quantisation. This enables a novel approach to hierarchical discrete variational autoencoders with numerous layers of latent variables that we train end-to-end. Unlike discrete VAEs with a single layer of latent variables, we can produce realistic-looking samples by ancestral sampling: it is not essential to train a second generative model over the learnt latent representations, to be sampled from and then decoded. Further, we observe that different layers of our model become associated with different aspects of the data.
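The core ingredient named above, relaxed Vector-Quantisation, can be illustrated with a minimal sketch: instead of hard nearest-neighbour codebook lookup, each encoding receives soft "responsibilities" over the codebook, computed as a temperature-controlled softmax over negative squared distances, and the quantised output is the responsibility-weighted mixture of codewords. This is a generic relaxed-VQ sketch under assumed conventions, not the paper's exact Relaxed-Responsibility parameterisation; the function name and temperature handling are illustrative.

```python
import numpy as np

def relaxed_vq(z, codebook, temperature=1.0):
    """Soft codebook assignment (generic relaxed-VQ sketch).

    z:         (N, D) array of encoder outputs
    codebook:  (K, D) array of codewords
    Returns the soft-quantised outputs (N, D) and the
    responsibilities (N, K), each row summing to 1.
    """
    # Squared Euclidean distance from each encoding to each codeword: (N, K)
    d2 = ((z[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=-1)
    # Responsibilities: softmax over negative distances, sharpened by temperature
    logits = -d2 / temperature
    logits -= logits.max(axis=1, keepdims=True)  # numerical stability
    resp = np.exp(logits)
    resp /= resp.sum(axis=1, keepdims=True)
    # Soft quantisation: responsibility-weighted mixture of codewords
    z_q = resp @ codebook
    return z_q, resp
```

As the temperature tends to zero, the responsibilities collapse to one-hot vectors and the mixture reduces to standard hard vector quantisation, which is why such relaxations admit end-to-end gradient-based training.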




Related research:

- Discrete Variational Autoencoders
- Hierarchical Sketch Induction for Paraphrase Generation
- Ladder Variational Autoencoders
- Hierarchical Quantized Autoencoders
- Autoregressive Co-Training for Learning Discrete Speech Representations
- A Hierarchical Latent Structure for Variational Conversation Modeling
- Extractive Summary as Discrete Latent Variables