Model Selection for Bayesian Autoencoders

06/11/2021
by Ba-Hien Tran, et al.

We develop a novel method for carrying out model selection for Bayesian autoencoders (BAEs) by means of prior hyper-parameter optimization. Inspired by the common practice of type-II maximum likelihood optimization and its equivalence to Kullback-Leibler divergence minimization, we propose to optimize the distributional sliced-Wasserstein distance (DSWD) between the output of the autoencoder and the empirical data distribution. The advantages of this formulation are that the DSWD can be estimated from samples alone and scales to high-dimensional problems. We carry out posterior estimation of the BAE parameters via stochastic gradient Hamiltonian Monte Carlo (SGHMC) and turn our BAE into a generative model by fitting a flexible Dirichlet mixture model in the latent space. Consequently, we obtain a powerful alternative to variational autoencoders, which are the preferred choice in modern applications of autoencoders for representation learning with uncertainty. We evaluate our approach qualitatively and quantitatively through an extensive experimental campaign on a range of unsupervised learning tasks, and show that, in small-data regimes where priors matter, our approach provides state-of-the-art results, outperforming multiple competitive baselines.
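To make the objective concrete, the sketch below shows how a sliced-Wasserstein distance can be estimated from two sample sets using only random projections and sorting. This is the plain sliced-Wasserstein estimator with directions drawn uniformly on the unit sphere; the paper's distributional variant (DSWD) additionally optimizes over the distribution of slicing directions, which this sketch omits. All names here are illustrative and not taken from the authors' code.

```python
import numpy as np

def sliced_wasserstein(x, y, n_projections=100, p=2, seed=None):
    """Monte Carlo estimate of the sliced p-Wasserstein distance between
    two equally sized sample sets x and y of shape (n, d)."""
    rng = np.random.default_rng(seed)
    # Draw random slicing directions uniformly on the unit sphere.
    theta = rng.standard_normal((n_projections, x.shape[1]))
    theta /= np.linalg.norm(theta, axis=1, keepdims=True)
    # Project both sample sets onto every direction.
    x_proj = x @ theta.T  # shape (n, n_projections)
    y_proj = y @ theta.T
    # In 1-D, the p-Wasserstein distance between empirical measures is the
    # L_p distance between sorted projections (matched quantiles).
    x_sorted = np.sort(x_proj, axis=0)
    y_sorted = np.sort(y_proj, axis=0)
    return float(np.mean(np.abs(x_sorted - y_sorted) ** p) ** (1.0 / p))
```

In a training loop, x would be a data batch and y the corresponding autoencoder reconstructions, so the distance compares the model's output distribution with the empirical data distribution. Likewise, a minimal sketch of the generative step, assuming an encoder/decoder pair from a trained BAE: fit a mixture with a Dirichlet-process prior over its weights to the encoded training data (scikit-learn's BayesianGaussianMixture serves here as a stand-in for the paper's latent density model), then sample latent codes and decode them.

```python
from sklearn.mixture import BayesianGaussianMixture

def fit_latent_mixture(latents, n_components=20, seed=0):
    """Fit a truncated Dirichlet-process Gaussian mixture to encoded
    training data; a stand-in for the paper's latent density model."""
    mixture = BayesianGaussianMixture(
        n_components=n_components,
        weight_concentration_prior_type="dirichlet_process",
        covariance_type="full",
        random_state=seed,
    )
    return mixture.fit(latents)

# Generation (illustrative): z_new, _ = mixture.sample(64), then decode
# z_new with each SGHMC posterior sample of the decoder and average.
```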


Related research

Fully Bayesian Autoencoders with Latent Sparse Gaussian Processes (02/09/2023)
Autoencoders and their variants are among the most widely used models in...

Sinkhorn AutoEncoders (10/02/2018)
Optimal Transport offers an alternative to maximum likelihood for learni...

Laplacian Autoencoders for Learning Stochastic Representations (06/30/2022)
Established methods for unsupervised representation learning such as var...

Bayesian Variational Autoencoders for Unsupervised Out-of-Distribution Detection (12/11/2019)
Despite their successes, deep neural networks still make unreliable pred...

Amortised Learning by Wake-Sleep (02/22/2020)
Models that employ latent variables to capture structure in observed dat...

LaDDer: Latent Data Distribution Modelling with a Generative Prior (08/31/2020)
In this paper, we show that the performance of a learnt generative model...

Tutorial on Variational Autoencoders (06/19/2016)
In just three years, Variational Autoencoders (VAEs) have emerged as one...
