
Closing the gap: Exact maximum likelihood training of generative autoencoders using invertible layers

by Gianluigi Silvestri, et al.

In this work, we provide an exact-likelihood alternative to the variational training of generative autoencoders. We show that VAE-style autoencoders can be constructed using invertible layers, which offer a tractable exact likelihood without the need for any regularization terms. This is achieved while leaving complete freedom in the choice of encoder, decoder and prior architectures, making our approach a drop-in replacement for the training of existing VAEs and VAE-style models. We refer to the resulting models as Autoencoders within Flows (AEF), since the encoder, decoder and prior are defined as individual layers of an overall invertible architecture. We show that the approach results in strikingly higher performance than architecturally equivalent VAEs in terms of log-likelihood, sample quality and denoising performance. In a broad sense, the main ambition of this work is to close the gap between the normalizing flow and autoencoder literature under the common framework of invertibility and exact maximum likelihood.
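The key ingredient the abstract relies on is that an invertible layer admits an exact log-likelihood through the change-of-variables formula, log p(x) = log p(z) + log |det J|, with no variational bound or KL regularizer. The following is a minimal sketch of that computation for a single elementwise affine layer with a standard Gaussian prior; it is illustrative only and is not the AEF architecture from the paper (the layer, prior, and function names here are hypothetical).

```python
import numpy as np

def affine_forward(x, scale, shift):
    """Invertible elementwise affine layer: z = scale * x + shift.

    The Jacobian is diagonal, so log|det J| reduces to a sum of
    log|scale| terms -- this is what keeps the likelihood tractable.
    """
    z = scale * x + shift
    log_det = np.sum(np.log(np.abs(scale)))
    return z, log_det

def standard_normal_logpdf(z):
    """Log density of an isotropic standard Gaussian prior p(z)."""
    return -0.5 * np.sum(z ** 2) - 0.5 * z.size * np.log(2.0 * np.pi)

def exact_log_likelihood(x, scale, shift):
    """Exact log p(x) via change of variables -- no ELBO, no KL term."""
    z, log_det = affine_forward(x, scale, shift)
    return standard_normal_logpdf(z) + log_det

x = np.array([0.5, -1.2, 0.3])
scale = np.array([2.0, 0.5, 1.5])
shift = np.array([0.1, 0.0, -0.2])
print(exact_log_likelihood(x, scale, shift))
```

In a full flow, many such invertible layers are composed and their log-determinant terms summed; training maximizes this exact log-likelihood directly, which is the objective the paper contrasts with the VAE's variational lower bound.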




Deep Directed Generative Autoencoders

For discrete data, the likelihood P(x) can be rewritten exactly and para...

TzK Flow - Conditional Generative Model

We introduce TzK (pronounced "task"), a conditional flow-based encoder/d...

Ladder Variational Autoencoders

Variational Autoencoders are powerful models for unsupervised learning. ...

Generative Reversible Networks

Generative models with an encoding component such as autoencoders curren...

Exact Rate-Distortion in Autoencoders via Echo Noise

Compression is at the heart of effective representation learning. Howeve...

Distributed Evolution of Deep Autoencoders

Autoencoders have seen wide success in domains ranging from feature sele...

A Generalised Linear Model Framework for Variational Autoencoders based on Exponential Dispersion Families

Although variational autoencoders (VAE) are successfully used to obtain ...