Auxiliary Guided Autoregressive Variational Autoencoders

11/30/2017
by Thomas Lucas, et al.

Generative modeling of high-dimensional data is a key problem in machine learning. Successful approaches include latent variable models and autoregressive models. The complementary strengths of these approaches, modeling global and local image statistics respectively, suggest hybrid models that combine both. Our contribution is to train such hybrid models using an auxiliary loss function that controls which information is captured by the latent variables and which is left to the autoregressive decoder. In contrast, prior work on such hybrid models needed to limit the capacity of the autoregressive decoder to prevent degenerate models that ignore the latent variables and rely only on autoregressive modeling. Our approach results in models with meaningful latent variable representations that rely on powerful autoregressive decoders to model image details. Our model generates qualitatively convincing samples and yields state-of-the-art quantitative results.
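
To make the idea concrete, the training objective can be thought of as an ELBO augmented with an auxiliary reconstruction term: the latent variables must reconstruct the image on their own (capturing global structure), while a conditional autoregressive decoder accounts for the remaining local detail. The sketch below is a minimal illustrative version in PyTorch, not the authors' implementation; the module names (`encoder`, `aux_decoder`, `ar_decoder`), the discretized 256-way pixel likelihood, and the weight `lambda_aux` are assumptions made for illustration.

```python
import torch
import torch.nn.functional as F

def hybrid_vae_ar_loss(x, encoder, aux_decoder, ar_decoder, lambda_aux=1.0):
    """One training-loss evaluation for a hybrid latent-variable /
    autoregressive model with an auxiliary reconstruction term.
    All modules are hypothetical stand-ins for the paper's networks."""
    # q(z|x): amortized diagonal-Gaussian posterior with reparameterization.
    mu, logvar = encoder(x)
    z = mu + torch.exp(0.5 * logvar) * torch.randn_like(mu)

    # Auxiliary decoder reconstructs the image from z alone; this loss
    # pushes global image structure into the latent variables.
    x_aux = aux_decoder(z)
    aux_loss = F.mse_loss(x_aux, x, reduction="sum")

    # Autoregressive decoder (e.g. PixelCNN-style) models fine detail,
    # conditioned on the auxiliary reconstruction; teacher-forced on x.
    logits = ar_decoder(x, cond=x_aux)        # (B, 256, C, H, W) logits
    targets = (x * 255).round().long()        # (B, C, H, W), assumes x in [0, 1]
    nll = F.cross_entropy(logits, targets, reduction="sum")

    # KL term of the ELBO against a unit-Gaussian prior.
    kl = -0.5 * torch.sum(1.0 + logvar - mu.pow(2) - logvar.exp())

    # lambda_aux trades off how much information the latents must carry
    # versus what is left to the autoregressive decoder.
    return nll + kl + lambda_aux * aux_loss
```

In this sketch, raising `lambda_aux` forces more information into the latent path, which is the lever the auxiliary loss provides: the autoregressive decoder can stay at full capacity without the model collapsing into a purely autoregressive solution that ignores the latents.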
