
Variational Laplace for Bayesian neural networks

by   Ali Unlu, et al.

We develop variational Laplace for Bayesian neural networks (BNNs), which exploits a local approximation of the curvature of the likelihood to estimate the ELBO without stochastic sampling of the neural-network weights. Variational Laplace performs better on image classification tasks than MAP inference, and far better than standard variational inference with stochastic sampling, despite using the same mean-field Gaussian approximate posterior. The variational Laplace objective is simple to evaluate: it is, in essence, the log-likelihood, plus weight decay, plus a squared-gradient regularizer. Finally, we emphasise that care is needed when benchmarking standard VI, as there is a risk of stopping training before the variance parameters have converged. We show that such premature stopping can be avoided by increasing the learning rate for the variance parameters.
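To make the shape of the objective concrete, here is a minimal sketch of a three-term loss of the kind the abstract describes (negative log-likelihood, plus weight decay, plus a squared-gradient regularizer), using a simple logistic-regression likelihood rather than a neural network. The function name `vl_objective` and the coefficient values are illustrative assumptions, not the paper's actual implementation; in the paper the squared-gradient term arises from the local curvature approximation to the ELBO.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def vl_objective(w, X, y, weight_decay=1e-2, sq_grad_coeff=1e-2):
    """Illustrative three-term objective:
    negative log-likelihood + weight decay + squared-gradient regularizer.
    (A sketch with a logistic-regression likelihood, not the paper's code.)
    """
    p = sigmoid(X @ w)
    eps = 1e-12  # numerical guard for log(0)
    # Negative log-likelihood of Bernoulli labels
    nll = -np.sum(y * np.log(p + eps) + (1.0 - y) * np.log(1.0 - p + eps))
    # Weight decay, i.e. a Gaussian prior on the weights
    wd = weight_decay * np.sum(w ** 2)
    # Gradient of the log-likelihood w.r.t. the weights (closed form here)
    grad_ll = X.T @ (y - p)
    # Squared-gradient penalty, standing in for the curvature-based term
    sq_grad = sq_grad_coeff * np.sum(grad_ll ** 2)
    return nll + wd + sq_grad
```

Because each term is an ordinary function of the weights, the objective can be minimised with any standard gradient-based optimiser, without sampling the weights at each step.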




Gradient Regularisation as Approximate Variational Inference

Variational inference in Bayesian neural networks is usually performed u...

Mixed Variational Inference

The Laplace approximation has been one of the workhorses of Bayesian inf...

'In-Between' Uncertainty in Bayesian Neural Networks

We describe a limitation in the expressiveness of the predictive uncerta...

Online Laplace Model Selection Revisited

The Laplace approximation provides a closed-form model selection objecti...

Riemannian Laplace approximations for Bayesian neural networks

Bayesian neural networks often approximate the weight-posterior with a G...

Challenges and Pitfalls of Bayesian Unlearning

Machine unlearning refers to the task of removing a subset of training d...

Curvature-Sensitive Predictive Coding with Approximate Laplace Monte Carlo

Predictive coding (PC) accounts of perception now form one of the domina...