
A Generalised Linear Model Framework for Variational Autoencoders based on Exponential Dispersion Families

by   Robert Sicks, et al.

Although variational autoencoders (VAE) are successfully used to obtain meaningful low-dimensional representations for high-dimensional data, aspects of their loss function are not yet fully understood. We introduce a theoretical framework based on a connection between VAE and generalized linear models (GLM). The equality between the activation function of a VAE and the inverse of the link function of a GLM enables us to provide a systematic generalization of the loss analysis for VAE under the assumption that the distribution of the decoder belongs to an exponential dispersion family (EDF). As a further result, we can initialize VAE networks by maximum likelihood estimates (MLE) that improve training performance on both synthetic and real-world data sets.
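The correspondence between decoder activations and inverse GLM links can be made concrete. The sketch below (an illustrative assumption, not the paper's exact construction) pairs three exponential dispersion families with their canonical links and uses the link to initialize the decoder's output bias at the feature-wise MLE of the mean, so that with zero output weights the decoder's initial reconstruction equals the data mean:

```python
import numpy as np

# Canonical inverse links for three exponential dispersion families.
# In the GLM view of a VAE, the decoder's output activation plays the
# role of the inverse link g^{-1}.
INVERSE_LINKS = {
    "gaussian": lambda eta: eta,                           # identity link
    "bernoulli": lambda eta: 1.0 / (1.0 + np.exp(-eta)),   # logit link -> sigmoid
    "poisson": np.exp,                                     # log link -> exp
}

# The corresponding (forward) link functions g.
LINKS = {
    "gaussian": lambda mu: mu,
    "bernoulli": lambda mu: np.log(mu / (1.0 - mu)),
    "poisson": np.log,
}

def init_output_bias(x, family):
    """MLE-style initialization of the decoder's output-layer bias.

    With zero output weights, the decoder initially predicts
    g^{-1}(bias); choosing bias = g(mean(x)) makes that prediction the
    per-feature MLE of the mean under the chosen EDF. Illustrative
    sketch only; the paper's exact initialization may differ.
    """
    mu_hat = x.mean(axis=0)
    return LINKS[family](mu_hat)

# Example: binary data whose feature-wise means are 0.2 and 0.8.
x = np.array([[0., 1.], [0., 1.], [1., 1.], [0., 0.], [0., 1.]])
bias = init_output_bias(x, "bernoulli")
recon = INVERSE_LINKS["bernoulli"](bias)  # recovers x.mean(axis=0)
```

Here `init_output_bias` and the family names are hypothetical helpers for illustration; the key point is that the activation applied to the decoder's pre-activation output is exactly the inverse link of the assumed EDF.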


Variational autoencoders in the presence of low-dimensional data: landscape and implicit bias

Variational Autoencoders (VAEs) are one of the most commonly used genera...

Variational Information Bottleneck on Vector Quantized Autoencoders

In this paper, we provide an information-theoretic interpretation of the...

The continuous Bernoulli: fixing a pervasive error in variational autoencoders

Variational autoencoders (VAE) have quickly become a central tool in mac...

Reproducible, incremental representation learning with Rosetta VAE

Variational autoencoders are among the most popular methods for distilli...

Learning Manifold Dimensions with Conditional Variational Autoencoders

Although the variational autoencoder (VAE) and its conditional extension...

PIE: Pseudo-Invertible Encoder

We consider the problem of information compression from high dimensional...

Learning and Inference in Imaginary Noise Models

Inspired by recent developments in learning smoothed densities with empi...