Online Laplace Model Selection Revisited

07/12/2023
by Jihao Andreas Lin, et al.

The Laplace approximation provides a closed-form model selection objective for neural networks (NNs). Online variants, which optimise NN parameters jointly with hyperparameters such as the weight decay strength, have seen renewed interest in the Bayesian deep learning community. However, these methods violate a critical assumption of Laplace's method, namely that the approximation is performed around a mode of the loss, calling their soundness into question. This work re-derives online Laplace methods, showing that they target a variational bound on a mode-corrected variant of the Laplace evidence which makes no stationarity assumptions. Online Laplace and its mode-corrected counterpart share stationary points at which (1) the NN parameters are a maximum a posteriori estimate, satisfying the assumption of Laplace's method, and (2) the hyperparameters maximise the Laplace evidence, motivating online methods. We demonstrate that these optima are roughly attained in practice by online algorithms using full-batch gradient descent on UCI regression datasets. The optimised hyperparameters prevent overfitting and outperform validation-based early stopping.
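
To make the interleaving concrete, below is a minimal sketch of the kind of online scheme the abstract describes: gradient steps on the maximum a posteriori (MAP) objective for the parameters, interleaved with a hyperparameter step that increases the Laplace evidence, log p(D) ~= log p(D|theta) + log p(theta) - 0.5 logdet(H / 2pi). This is not the paper's algorithm or code: it uses a synthetic random-feature linear model (so the mode and curvature required by Laplace's method are exact), the classical evidence-framework fixed-point updates of MacKay for the prior precision and noise variance, and hypothetical choices of dataset, step size, and update interval.

    import numpy as np

    rng = np.random.default_rng(0)

    # Synthetic 1-D regression data (a hypothetical stand-in for a UCI dataset).
    N, D = 100, 20
    X = rng.uniform(-3.0, 3.0, size=(N, 1))
    y = np.sin(X[:, 0]) + 0.1 * rng.standard_normal(N)

    # Fixed random Fourier features make the model linear in its parameters,
    # so the Laplace approximation's mode and Hessian are exact here.
    W = rng.standard_normal((1, D))
    b = rng.uniform(0.0, 2.0 * np.pi, size=D)
    Phi = np.cos(X @ W + b)

    theta = np.zeros(D)        # "network" parameters
    delta, sigma2 = 1.0, 1.0   # hyperparameters: prior precision, noise variance

    for step in range(3000):
        # Gradient step on the MAP objective (negative log joint):
        #   ||y - Phi @ theta||^2 / (2 sigma2) + (delta / 2) ||theta||^2
        resid = Phi @ theta - y
        grad = Phi.T @ resid / sigma2 + delta * theta
        eig = np.linalg.eigvalsh(Phi.T @ Phi / sigma2)  # curvature of data term
        theta -= grad / (eig[-1] + delta)  # step size bounded by total curvature

        if step % 50 == 49:
            # Interleaved hyperparameter step: classical evidence-framework
            # fixed-point updates, evaluated at the current (not necessarily
            # stationary) parameters, as in the online setting.
            gamma = np.sum(eig / (eig + delta))  # effective number of parameters
            delta = gamma / (theta @ theta + 1e-12)
            sigma2 = resid @ resid / max(N - gamma, 1e-12)

    print(f"prior precision {delta:.3f}, noise variance {sigma2:.4f}")

At a joint stationary point of this loop, the parameters are a MAP estimate under the current hyperparameters and the hyperparameters satisfy the evidence fixed-point equations, mirroring the two conditions the abstract identifies at the shared stationary points of online Laplace and its mode-corrected counterpart.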

Related research

06/17/2022 · Adapting the Linearised Laplace Model Evidence for Modern Deep Learning
The linearised Laplace method for estimating model uncertainty has recei...

04/08/2019 · Bayesian Neural Networks at Finite Temperature
We recapitulate the Bayesian formulation of neural network based classif...

02/27/2021 · Variational Laplace for Bayesian neural networks
We develop variational Laplace for Bayesian neural networks (BNNs) which...

06/06/2023 · Stochastic Marginal Likelihood Gradients using Neural Tangent Kernels
Selecting hyperparameters in deep learning greatly impacts its effective...

05/20/2018 · Online Structured Laplace Approximations For Overcoming Catastrophic Forgetting
We introduce the Kronecker factored online Laplace approximation for ove...

06/28/2021 · Laplace Redux – Effortless Bayesian Deep Learning
Bayesian formulations of deep learning have been shown to have compellin...

09/22/2015 · Modifying iterated Laplace approximations
In this paper, several modifications are introduced to the functional ap...
