Closing the gap: Exact maximum likelihood training of generative autoencoders using invertible layers

05/19/2022
by Gianluigi Silvestri, et al.

In this work, we provide an exact-likelihood alternative to the variational training of generative autoencoders. We show that VAE-style autoencoders can be constructed using invertible layers, which offer a tractable exact likelihood without the need for any regularization terms. This is achieved while leaving complete freedom in the choice of encoder, decoder and prior architectures, making our approach a drop-in replacement for the training of existing VAEs and VAE-style models. We refer to the resulting models as Autoencoders within Flows (AEF), since the encoder, decoder and prior are defined as individual layers of an overall invertible architecture. We show that the approach results in strikingly higher performance than architecturally equivalent VAEs in terms of log-likelihood, sample quality and denoising performance. In a broad sense, the main ambition of this work is to close the gap between the normalizing flow and autoencoder literature under the common framework of invertibility and exact maximum likelihood.
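The exact likelihood referenced above follows from the change-of-variables formula used throughout the normalizing-flow literature: for an invertible map f with base density p_Z, log p_X(x) = log p_Z(f(x)) + log|det J_f(x)|. Below is a minimal, hypothetical sketch of training an invertible model by exact maximum likelihood in PyTorch; the `AffineCoupling` layer and `exact_nll` helper are illustrative stand-ins for a generic flow, not the paper's AEF architecture.

```python
import torch
import torch.nn as nn

class AffineCoupling(nn.Module):
    """RealNVP-style affine coupling: invertible, with a cheap log-determinant.
    (Illustrative only; not the AEF layers from the paper.)"""
    def __init__(self, dim, hidden=64):
        super().__init__()
        # Maps the first half of the input to scale/shift for the second half.
        self.net = nn.Sequential(
            nn.Linear(dim // 2, hidden), nn.ReLU(), nn.Linear(hidden, dim)
        )

    def forward(self, x):
        x1, x2 = x.chunk(2, dim=-1)
        s, t = self.net(x1).chunk(2, dim=-1)
        y2 = x2 * torch.exp(s) + t          # invertible affine transform
        log_det = s.sum(dim=-1)             # log|det J| of this layer
        return torch.cat([x1, y2], dim=-1), log_det

def exact_nll(layers, x):
    """Exact negative log-likelihood via change of variables:
    log p(x) = log p_Z(f(x)) + log|det J_f(x)|, with no ELBO or KL term."""
    z, log_det_total = x, torch.zeros(x.shape[0])
    for layer in layers:
        z, log_det = layer(z)
        log_det_total = log_det_total + log_det
    log_pz = torch.distributions.Normal(0.0, 1.0).log_prob(z).sum(dim=-1)
    return -(log_pz + log_det_total).mean()

# Usage sketch: two couplings on 4-d toy data (real flows would also
# permute dimensions between couplings so every coordinate is updated).
layers = [AffineCoupling(4), AffineCoupling(4)]
x = torch.randn(128, 4)
loss = exact_nll(layers, x)  # minimize this with any optimizer
```

The point of the AEF construction, as the abstract states, is that the encoder, decoder and prior of a VAE-style model can themselves be cast as layers of such an invertible architecture, so this exact objective replaces the variational lower bound.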

