Variational Linearized Laplace Approximation for Bayesian Deep Learning

02/24/2023
by Luis A. Ortega et al.

Pre-trained deep neural networks can be adapted to perform uncertainty estimation by transforming them into Bayesian neural networks via methods such as the Laplace approximation (LA) or its linearized variant (LLA), among others. To make these methods tractable, the generalized Gauss-Newton (GGN) approximation is often used. Even so, the computational cost involved forces both LA and LLA to rely on further approximations, such as Kronecker-factored or diagonal GGN matrices, which can degrade the results. To address these issues, we propose a new method for scaling LLA that uses a variational sparse Gaussian process (GP) approximation based on the dual RKHS formulation of GPs. Our method retains the predictive mean of the original model while allowing efficient stochastic optimization and scalability in both the number of parameters and the size of the training dataset. Moreover, its training cost is independent of the number of training points, improving over previously existing methods. Our preliminary experiments indicate that it outperforms existing efficient variants of LLA, such as accelerated LLA (ELLA), which is based on the Nyström approximation.
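To make the LLA setup above concrete, here is a minimal NumPy sketch of the linearized Laplace predictive for regression with a Gaussian likelihood, where the GGN is J^T J / noise². The toy model and all names below are illustrative assumptions, not from the paper; the model is chosen linear in its parameters so the linearization introduces no extra error.

```python
import numpy as np

# Illustrative toy model f(x; w) = w0*x + w1*sin(x), linear in w,
# so its Jacobian w.r.t. w is exact and simple to write down.
def jacobian(x):
    # Rows: inputs, columns: parameters (N x P).
    return np.stack([x, np.sin(x)], axis=1)

rng = np.random.default_rng(0)
x_train = rng.uniform(-3.0, 3.0, size=50)
w_true = np.array([0.5, -1.0])
noise = 0.1
y_train = jacobian(x_train) @ w_true + noise * rng.standard_normal(50)

J = jacobian(x_train)                     # Jacobian at the MAP estimate
prior_prec = 1.0
# GGN for a Gaussian likelihood is J^T J / noise^2; the Laplace
# posterior precision adds the (isotropic) prior precision.
Lambda = prior_prec * np.eye(2) + J.T @ J / noise**2
Sigma = np.linalg.inv(Lambda)             # Laplace posterior covariance

# MAP weights (here simply the regularized least-squares solution).
w_map = Sigma @ (J.T @ y_train) / noise**2

x_test = np.array([0.5, 2.0])
J_star = jacobian(x_test)
mean = J_star @ w_map                     # predictive mean = MAP prediction
# Predictive variance: observation noise plus parameter uncertainty
# propagated through the linearization, j* Sigma j*^T per test point.
var = noise**2 + np.einsum('ip,pq,iq->i', J_star, Sigma, J_star)
print(mean, var)
```

The paper's contribution is to avoid forming (or crudely approximating) the full P x P posterior covariance `Sigma` by working instead with a variational sparse GP in function space; this sketch only shows the exact quantity that such approximations target.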


research
07/21/2020

Disentangling the Gauss-Newton Method and Approximate Inference for Neural Networks

In this thesis, we disentangle the generalized Gauss-Newton and approxim...
research
10/23/2022

Accelerated Linearized Laplace Approximation for Bayesian Deep Learning

Laplace approximation (LA) and its linearized variant (LLA) enable effor...
research
03/06/2017

Multiplicative Normalizing Flows for Variational Bayesian Neural Networks

We reinterpret multiplicative noise in neural networks as auxiliary rand...
research
08/19/2020

Improving predictions of Bayesian neural networks via local linearization

In this paper we argue that in Bayesian deep learning, the frequently ut...
research
07/09/2021

L2M: Practical posterior Laplace approximation with optimization-driven second moment estimation

Uncertainty quantification for deep neural networks has recently evolved...
research
04/20/2020

Tractable Approximate Gaussian Inference for Bayesian Neural Networks

In this paper, we propose an analytical method allowing for tractable ap...
research
06/28/2021

Laplace Redux – Effortless Bayesian Deep Learning

Bayesian formulations of deep learning have been shown to have compellin...
