
Relaxed-Responsibility Hierarchical Discrete VAEs

07/14/2020
by Matthew Willetts, et al.

Successfully training Variational Autoencoders (VAEs) with a hierarchy of discrete latent variables remains an area of active research. Leveraging insights from classical methods of inference, we introduce Relaxed-Responsibility Vector-Quantisation, a novel way to parameterise discrete latent variables and a refinement of relaxed Vector-Quantisation. This enables a novel approach to hierarchical discrete variational autoencoders with numerous layers of latent variables, which we train end-to-end. Unlike discrete VAEs with a single layer of latent variables, we can produce realistic-looking samples by ancestral sampling: it is not necessary to train a second generative model over the learnt latent representations, to sample from and then decode. Further, we observe that different layers of our model become associated with different aspects of the data.
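The abstract does not spell out the parameterisation, but relaxed Vector-Quantisation is commonly implemented by replacing the hard nearest-code assignment with a differentiable (e.g. Gumbel-softmax) weighting over codebook entries. Below is a minimal PyTorch sketch of one such layer, assuming, as the name "relaxed-responsibility" and the reference to classical inference suggest, that the relaxation weights take the form of Gaussian-mixture responsibilities with learnable mixture weights and per-code scales. All names, shapes, and hyperparameters here are illustrative assumptions, not the paper's actual implementation.

```python
import torch
import torch.nn.functional as F

class RelaxedResponsibilityVQ(torch.nn.Module):
    """Sketch of a relaxed vector-quantisation layer whose relaxation
    weights are mixture 'responsibilities', as in the E-step of a
    Gaussian mixture model. Hypothetical, for illustration only."""

    def __init__(self, num_codes: int, dim: int, tau: float = 1.0):
        super().__init__()
        self.codebook = torch.nn.Parameter(torch.randn(num_codes, dim))
        self.log_pi = torch.nn.Parameter(torch.zeros(num_codes))     # mixture weights
        self.log_sigma = torch.nn.Parameter(torch.zeros(num_codes))  # per-code scales
        self.tau = tau  # Gumbel-softmax temperature

    def forward(self, z_e: torch.Tensor) -> torch.Tensor:
        # Squared distances between encodings (B, D) and codes (K, D): (B, K)
        d2 = torch.cdist(z_e, self.codebook).pow(2)
        sigma2 = self.log_sigma.exp().pow(2)
        # Responsibility logits, as in a GMM E-step (up to constants):
        # log pi_k - ||z - e_k||^2 / (2 sigma_k^2)
        logits = self.log_pi - d2 / (2.0 * sigma2)
        # Differentiable relaxed one-hot sample over the K codes
        w = F.gumbel_softmax(logits, tau=self.tau, hard=False)
        # Relaxed quantisation: convex combination of codebook entries
        return w @ self.codebook
```

In a hierarchy, one such layer per level of latent variables would let the whole generative model be trained end-to-end and sampled top-down by ancestral sampling, as the abstract describes.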

Related research

09/07/2016  Discrete Variational Autoencoders
Probabilistic models with discrete latent variables naturally capture da...

03/07/2022  Hierarchical Sketch Induction for Paraphrase Generation
We propose a generative model of paraphrase generation, that encourages ...

02/06/2016  Ladder Variational Autoencoders
Variational Autoencoders are powerful models for unsupervised learning. ...

02/19/2020  Hierarchical Quantized Autoencoders
Despite progress in training neural networks for lossy image compression...

03/29/2022  Autoregressive Co-Training for Learning Discrete Speech Representations
While several self-supervised approaches for learning discrete speech re...

04/10/2018  A Hierarchical Latent Structure for Variational Conversation Modeling
Variational autoencoders (VAE) combined with hierarchical RNNs have emer...

11/14/2018  Extractive Summary as Discrete Latent Variables
In this paper, we compare various methods to compress a text using a neu...