Disentangling the Gauss-Newton Method and Approximate Inference for Neural Networks

07/21/2020
by Alexander Immer, et al.

In this thesis, we disentangle the generalized Gauss-Newton method and approximate inference for Bayesian deep learning. The generalized Gauss-Newton method is an optimization technique used in several popular Bayesian deep learning algorithms. Algorithms that combine it with the Laplace and Gaussian variational approximations have recently led to state-of-the-art results in Bayesian deep learning. While the Laplace and Gaussian variational approximations have been studied extensively, their interplay with the Gauss-Newton method remains unclear. Recent criticism of priors and posterior approximations in Bayesian deep learning further underscores the need for a deeper understanding of practical algorithms. Analyzing the Gauss-Newton method and the Laplace and Gaussian variational approximations individually for neural networks provides both theoretical insight and new practical algorithms. We find that the Gauss-Newton method significantly simplifies the underlying probabilistic model. In particular, the combination of the Gauss-Newton method with approximate inference can be cast as inference in a generalized linear model or, equivalently, a Gaussian process model. The Laplace and Gaussian variational approximations can then provide a posterior approximation to these simplified models. This disentangled understanding of recent Bayesian deep learning algorithms also leads to new methods: first, the connection to Gaussian processes enables new function-space inference algorithms; second, we present a marginal likelihood approximation of the underlying probabilistic model to tune neural network hyperparameters; finally, the identified underlying models lead to different methods for computing predictive distributions. In fact, we find that these prediction methods for Bayesian neural networks often work better than the default choice and solve a common issue with the Laplace approximation.
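To make the generalized-linear-model view concrete, here is a minimal sketch (in JAX; not code from the thesis) of the GGN-Laplace approximation with the linearized predictive on a toy 1D regression problem. The model, data, and all hyperparameter values (hidden width H, noise variance sigma2, prior precision delta) are illustrative assumptions, not choices from the thesis.

```python
import jax
import jax.numpy as jnp

H = 10                 # hidden width (illustrative choice)
P = 3 * H + 1          # total number of parameters

def unpack(theta):
    """Split the flat parameter vector into MLP weights and biases."""
    W1 = theta[:H].reshape(1, H)
    b1 = theta[H:2 * H]
    W2 = theta[2 * H:3 * H].reshape(H, 1)
    b2 = theta[3 * H:]
    return W1, b1, W2, b2

def f(theta, X):
    """Single-hidden-layer MLP; X has shape (N, 1), output shape (N,)."""
    W1, b1, W2, b2 = unpack(theta)
    return (jnp.tanh(X @ W1 + b1) @ W2 + b2).squeeze(-1)

# Toy 1D regression data.
X = jnp.linspace(-2.0, 2.0, 40).reshape(-1, 1)
y = jnp.sin(3.0 * X).squeeze(-1) + 0.1 * jax.random.normal(jax.random.PRNGKey(0), (40,))

sigma2 = 0.1 ** 2      # observation noise variance (assumed known here)
delta = 1.0            # precision of the isotropic Gaussian prior

def neg_log_joint(theta):
    resid = y - f(theta, X)
    return 0.5 / sigma2 * jnp.sum(resid ** 2) + 0.5 * delta * jnp.sum(theta ** 2)

# Crude MAP estimation by full-batch gradient descent.
theta = 0.1 * jax.random.normal(jax.random.PRNGKey(1), (P,))
grad_fn = jax.jit(jax.grad(neg_log_joint))
for _ in range(5000):
    theta = theta - 1e-3 * grad_fn(theta)

# GGN at the MAP. For a Gaussian likelihood the GGN is J^T J / sigma2,
# so the Laplace posterior precision is A = J^T J / sigma2 + delta * I.
J = jax.jacobian(f)(theta, X)                 # (N, P) Jacobian w.r.t. theta
A = J.T @ J / sigma2 + delta * jnp.eye(P)
Sigma = jnp.linalg.inv(A)                     # Gaussian posterior covariance

# Linearized (GLM) predictive: the network gives the mean, and the Jacobian
# propagates the weight-space uncertainty to function space.
X_test = jnp.linspace(-3.0, 3.0, 9).reshape(-1, 1)
J_test = jax.jacobian(f)(theta, X_test)       # (M, P)
pred_mean = f(theta, X_test)
pred_var = jnp.einsum("mp,pq,mq->m", J_test, Sigma, J_test) + sigma2
```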

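Continuing the sketch above, the same quantities give a Laplace approximation to the log marginal likelihood of the linearized model, which is the kind of objective the abstract proposes for tuning hyperparameters such as the prior precision. This is again an illustrative sketch reusing theta, A, and neg_log_joint from the block above, not the thesis implementation.

```python
# Laplace approximation to the log marginal likelihood (evidence):
#   log p(y) ~= log p(y | theta) + log p(theta) + (P/2) log 2*pi - (1/2) log|A|
sign, logdet_A = jnp.linalg.slogdet(A)
log_evidence = (-neg_log_joint(theta)
                - 0.5 * X.shape[0] * jnp.log(2 * jnp.pi * sigma2)  # likelihood normalizer
                + 0.5 * P * jnp.log(delta)                         # prior normalizer (2*pi terms cancel)
                - 0.5 * logdet_A)
```

One would then evaluate log_evidence over a grid of delta values (refitting theta each time) and keep the setting with the highest evidence.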

Related research

02/24/2023  Variational Linearized Laplace Approximation for Bayesian Deep Learning
Pre-trained deep neural networks can be adapted to perform uncertainty e...

11/02/2021  Bayes-Newton Methods for Approximate Bayesian Inference with PSD Guarantees
We formulate natural gradient variational inference (VI), expectation pr...

08/19/2020  Improving predictions of Bayesian neural networks via local linearization
In this paper we argue that in Bayesian deep learning, the frequently ut...

06/12/2023  Riemannian Laplace approximations for Bayesian neural networks
Bayesian neural networks often approximate the weight-posterior with a G...

04/30/2022  NeuralEF: Deconstructing Kernels by Deep Neural Networks
Learning the principal eigenfunctions of an integral operator defined by...

07/09/2021  The Bayesian Learning Rule
We show that many machine-learning algorithms are specific instances of ...

10/23/2022  Accelerated Linearized Laplace Approximation for Bayesian Deep Learning
Laplace approximation (LA) and its linearized variant (LLA) enable effor...
