Learning and Inference in Imaginary Noise Models

05/18/2020
by Saeed Saremi, et al.

Inspired by recent developments in learning smoothed densities with empirical Bayes, we study variational autoencoders with a decoder that is tailored for the random variable Y = X + N(0, σ^2 I_d). A notion of smoothed variational inference emerges, where the smoothing is implicitly enforced by the noise model of the decoder; "implicit" because during training the encoder only sees clean samples. This is the concept of the imaginary noise model, in which the noise model dictates the functional form of the variational lower bound L(σ), but the noisy data are never seen during training. The model is named σ-VAE. We prove that all σ-VAEs are equivalent to each other via a simple β-VAE expansion: L(σ_2) ≡ L(σ_1, β), where β = σ_2^2/σ_1^2. We prove a similar result for the Laplace distribution in the exponential family. Empirically, we report an intriguing power law D_KL ∝ 1/σ for the trained models, and we study inference in the σ-VAE for unseen noisy data. The experiments are performed on MNIST, where we show that, quite remarkably, the model can make reasonable inferences on extremely noisy samples even though it has not seen any during training. The vanilla VAE completely breaks down in this regime. We finish with a hypothesis (the XYZ hypothesis) on these findings.
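To make the stated β-VAE equivalence concrete, the sketch below writes out the Gaussian-decoder lower bound L(σ) and the β-weighted objective L(σ, β), then checks numerically that L(σ_2) equals (σ_1^2/σ_2^2)·L(σ_1, β) plus an additive constant when β = σ_2^2/σ_1^2, so the two objectives share the same maximizers. This is a minimal illustration under the abstract's stated decoder N(μ_θ(z), σ^2 I_d), not the authors' code; the function names (elbo_sigma, elbo_beta) and the numbers standing in for model outputs are assumptions made for the example.

```python
import numpy as np

def elbo_sigma(recon_err, kl, sigma, d):
    """L(sigma): Gaussian decoder log-likelihood with fixed variance sigma^2, minus KL.

    recon_err : E_q[||x - mu_theta(z)||^2], expected squared reconstruction error
    kl        : D_KL(q(z|x) || p(z))
    sigma     : decoder noise scale
    d         : data dimension
    """
    log_lik = -recon_err / (2 * sigma**2) - 0.5 * d * np.log(2 * np.pi * sigma**2)
    return log_lik - kl

def elbo_beta(recon_err, kl, sigma, beta, d):
    """L(sigma, beta): same likelihood term, KL term weighted by beta (beta-VAE)."""
    log_lik = -recon_err / (2 * sigma**2) - 0.5 * d * np.log(2 * np.pi * sigma**2)
    return log_lik - beta * kl

# Check L(sigma_2) = (sigma_1^2 / sigma_2^2) * L(sigma_1, beta) + const,
# with beta = sigma_2^2 / sigma_1^2 (the constant does not depend on the model).
sigma1, sigma2, d = 0.5, 1.0, 784          # illustrative values (d = MNIST pixels)
beta = sigma2**2 / sigma1**2
recon_err, kl = 12.3, 4.5                  # arbitrary stand-ins for model outputs

lhs = elbo_sigma(recon_err, kl, sigma2, d)
rhs = (sigma1**2 / sigma2**2) * elbo_beta(recon_err, kl, sigma1, beta, d)
const = -0.5 * d * np.log(2 * np.pi * sigma2**2) \
        + (sigma1**2 / sigma2**2) * 0.5 * d * np.log(2 * np.pi * sigma1**2)
assert np.isclose(lhs - rhs, const)
```

The key point the check makes visible: rescaling the decoder variance only rescales the reconstruction term, so maximizing L(σ_2) is the same optimization problem as maximizing the β-VAE objective L(σ_1, β) with β = σ_2^2/σ_1^2.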

