Reconsidering Analytical Variational Bounds for Output Layers of Deep Networks

10/02/2019
by Otmane Sakhi, et al.

The combination of the re-parameterization trick with variational auto-encoders has caused a sensation in Bayesian deep learning, enabling the training of realistic generative models of images and considerably increasing our ability to use scalable latent variable models. The re-parameterization trick is necessary for models in which no analytical variational bound is available, and it allows noisy gradients to be computed for arbitrary models. However, for certain standard output layers of a neural network, analytical bounds are available, and the variational auto-encoder may be used without either the re-parameterization trick or any Monte Carlo approximation. In this work, we show that using the Jaakkola and Jordan bound, we can produce a binary classification layer that allows a Bayesian output layer to be trained using the standard stochastic gradient descent algorithm. We further demonstrate that a latent variable model employing the Bouchard bound for multi-class classification allows for fast training of a fully probabilistic latent factor model, even when the number of classes is very large.
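
The binary case rests on the Jaakkola-Jordan bound, log σ(x) ≥ log σ(ξ) + (x − ξ)/2 − λ(ξ)(x² − ξ²) with λ(ξ) = tanh(ξ/2)/(4ξ): the bound is quadratic in x, so its expectation under a Gaussian posterior has a closed form and the ELBO can be optimized by plain SGD. The sketch below is not the paper's code; it is a minimal PyTorch illustration assuming a diagonal Gaussian posterior q(w) = N(μ, diag(σ²)) and a standard-normal prior, with all names (JJBayesianLogistic, neg_elbo) invented for the example.

```python
import torch

class JJBayesianLogistic(torch.nn.Module):
    """Bayesian binary output layer trained via the Jaakkola-Jordan bound:
    the ELBO is analytical, so SGD needs no sampling or re-parameterization."""

    def __init__(self, dim):
        super().__init__()
        self.mu = torch.nn.Parameter(torch.zeros(dim))       # posterior mean
        self.log_var = torch.nn.Parameter(torch.zeros(dim))  # posterior log-variance

    def neg_elbo(self, phi, t):
        """phi: (N, dim) features; t: (N,) labels in {-1, +1}."""
        var = self.log_var.exp()
        mean_act = phi @ self.mu             # E_q[w^T phi]
        var_act = (phi ** 2) @ var           # Var_q[w^T phi]
        m2 = mean_act ** 2 + var_act         # E_q[(w^T phi)^2]
        xi = m2.clamp_min(1e-8).sqrt()       # optimal xi: xi^2 = E_q[(w^T phi)^2]
        lam = torch.tanh(xi / 2) / (4 * xi)  # lambda(xi)
        # Analytical lower bound on E_q[log sigmoid(t * w^T phi)]:
        exp_loglik = (torch.log(torch.sigmoid(xi))
                      + (t * mean_act - xi) / 2
                      - lam * (m2 - xi ** 2)).sum()
        # Closed-form KL(q || N(0, I)) for a diagonal Gaussian posterior.
        kl = 0.5 * (var + self.mu ** 2 - 1 - self.log_var).sum()
        return kl - exp_loglik

# Usage: ordinary SGD on a fully deterministic objective.
layer = JJBayesianLogistic(dim=10)
opt = torch.optim.SGD(layer.parameters(), lr=0.1)
phi = torch.randn(32, 10)
t = (torch.rand(32) > 0.5).float() * 2 - 1
opt.zero_grad()
loss = layer.neg_elbo(phi, t)
loss.backward()
opt.step()
```

The multi-class case replaces the sigmoid with the softmax normalizer, whose log-sum-exp admits the Bouchard upper bound log Σ_k exp(x_k) ≤ α + Σ_k [(x_k − α − ξ_k)/2 + λ(ξ_k)((x_k − α)² − ξ_k²) + log(1 + exp(ξ_k))], again quadratic in each x_k. A hypothetical helper under the same Gaussian assumption (bouchard_expected_lse is an invented name):

```python
import torch
import torch.nn.functional as F

def bouchard_expected_lse(mean_x, var_x, alpha):
    """Upper bound on E_q[logsumexp(x)] for independent x_k ~ N(mean_x[k], var_x[k]).
    alpha is a scalar variational parameter (optimized jointly); each xi_k is
    set to its optimal value xi_k^2 = E_q[(x_k - alpha)^2]."""
    quad = (mean_x - alpha) ** 2 + var_x   # E_q[(x_k - alpha)^2]
    xi = quad.clamp_min(1e-8).sqrt()
    lam = torch.tanh(xi / 2) / (4 * xi)
    return alpha + ((mean_x - alpha - xi) / 2
                    + lam * (quad - xi ** 2)
                    + F.softplus(xi)).sum()

# E_q[log softmax_c(x)] >= mean_x[c] - bouchard_expected_lse(mean_x, var_x, alpha),
# so the expected multi-class log-likelihood is again available without sampling,
# even when the number of classes is very large.
```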
