Quality of Uncertainty Quantification for Bayesian Neural Network Inference

06/24/2019
by Jiayu Yao, et al.

Bayesian Neural Networks (BNNs) place priors over the parameters of a neural network. Exact inference in BNNs, however, is intractable, so all practical inference methods for BNNs are approximate. In this work, we empirically compare the quality of predictive uncertainty estimates produced by 10 common inference methods on both regression and classification tasks. Our experiments demonstrate that commonly used metrics (e.g. test log-likelihood) can be misleading. They also indicate that inference innovations designed to capture structure in the posterior do not necessarily produce higher-quality posterior approximations.
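The abstract contrasts commonly reported metrics such as test log-likelihood with the actual quality of predictive uncertainty. As a minimal, hypothetical sketch of how that metric is typically computed from approximate posterior samples (this is not the authors' code; the names `forward`, `weight_samples`, and the homoscedastic Gaussian noise assumption are illustrative), one can Monte Carlo average the likelihood over sampled weights:

```python
# Hypothetical sketch: test log-likelihood of a BNN regressor
# estimated from approximate posterior samples.
import numpy as np
from scipy.special import logsumexp

def test_log_likelihood(x_test, y_test, weight_samples, forward, noise_std=0.1):
    """Monte Carlo estimate of the average test log-likelihood.

    weight_samples : parameter settings drawn from an approximate posterior
    forward(w, x)  : network prediction for inputs x under parameters w (assumed callable)
    noise_std      : assumed homoscedastic Gaussian observation noise
    """
    # Per-sample Gaussian log-likelihoods, shape (num_samples, num_test_points)
    log_liks = np.stack([
        -0.5 * np.log(2 * np.pi * noise_std ** 2)
        - 0.5 * ((y_test - forward(w, x_test)) / noise_std) ** 2
        for w in weight_samples
    ])
    # log p(y* | x*, D) ~= log (1/S) * sum_s p(y* | x*, w_s), averaged over test points
    return np.mean(logsumexp(log_liks, axis=0) - np.log(len(weight_samples)))
```

The same posterior samples also yield predictive means and variances; the paper's point is that a good value of a metric like this one does not by itself certify that those uncertainty estimates are well calibrated.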

