Fixing Asymptotic Uncertainty of Bayesian Neural Networks with Infinite ReLU Features

10/06/2020
by Agustinus Kristiadi, et al.

Approximate Bayesian methods can mitigate overconfidence in ReLU networks. However, far away from the training data, even Bayesian neural networks (BNNs) can still underestimate uncertainty and thus be overconfident. We suggest fixing this by considering an infinite number of ReLU features over the input domain that are never part of the training process and thus remain at their prior values. Perhaps surprisingly, we show that this model leads to a tractable Gaussian process (GP) term that can be added to a pre-trained BNN's posterior at test time with negligible cost overhead. The BNN then yields structured uncertainty near the training data, while the GP prior calibrates uncertainty far away from it. As a key contribution, we prove that the added uncertainty yields cubic predictive variance growth, and hence the ideal uniform (maximum-entropy) confidence in multi-class classification far from the training data.
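To make the mechanism concrete, the sketch below shows one way such a test-time GP residual could be wired up. It is an illustrative assumption, not the paper's exact construction: it assumes a scalar input, a hypothetical bnn_predict routine standing in for any pre-trained BNN's predictive mean and variance, and the classical result (Williams, 1998) that integrating ReLU features max(0, x - c) over bias locations c yields a cubic-spline kernel whose prior variance k(x, x) grows as |x|^3, matching the cubic growth stated above.

```python
import numpy as np

def relu_feature_kernel(x1, x2, sigma2=1.0):
    """GP kernel induced by infinitely many ReLU features
    phi_c(x) = max(0, x - c), with bias locations c integrated over
    the input domain (cf. Williams, 1998). For 0 <= x1, x2:

        k(x1, x2) = sigma2 * min(x1, x2)**2 * (3*max(x1, x2) - min(x1, x2)) / 6

    so the prior variance k(x, x) = sigma2 * x**3 / 3 grows cubically.
    Inputs are symmetrized via abs() purely for illustration.
    """
    a = np.minimum(np.abs(x1), np.abs(x2))
    b = np.maximum(np.abs(x1), np.abs(x2))
    return sigma2 * a**2 * (3.0 * b - a) / 6.0

def augmented_predictive(x, bnn_predict, sigma2=1.0):
    """Add the GP residual's prior variance to a pre-trained BNN's
    predictive variance at test time. The BNN itself is untouched:
    only its output variance is inflated by k(x, x)."""
    mean, var = bnn_predict(x)
    return mean, var + relu_feature_kernel(x, x, sigma2)

# Toy usage: a stand-in BNN whose predictive variance saturates
# far from the data, the failure mode the GP residual corrects.
if __name__ == "__main__":
    bnn_predict = lambda x: (np.tanh(x), 0.1 + 0.05 * np.tanh(x) ** 2)
    for x in [0.0, 1.0, 10.0, 100.0]:
        mean, var = augmented_predictive(np.asarray(x), bnn_predict)
        print(f"x={x:6.1f}  mean={mean:+.3f}  var={var:.3e}")
```

Because the added variance grows like |x|^3 while the BNN's own variance stays bounded, the resulting softmax confidence decays toward the uniform distribution far from the training data, which is the behavior the abstract's key contribution refers to.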


Related research

12/13/2018
Why ReLU networks yield high-confidence predictions far away from the training data and how to mitigate the problem
Classifiers used in the wild, in particular for safety-critical systems,...

02/24/2020
Being Bayesian, Even Just a Bit, Fixes Overconfidence in ReLU Networks
The point estimates of ReLU classification networks—arguably the most wi...

09/26/2019
Towards neural networks that provably know when they don't know
It has recently been shown that ReLU networks produce arbitrarily over-c...

04/30/2022
Deep Ensemble as a Gaussian Process Approximate Posterior
Deep Ensemble (DE) is an effective alternative to Bayesian neural networ...

11/17/2021
Do Not Trust Prediction Scores for Membership Inference Attacks
Membership inference attacks (MIAs) aim to determine whether a specific ...

10/14/2020
Exploring the Uncertainty Properties of Neural Networks' Implicit Priors in the Infinite-Width Limit
Modern deep learning models have achieved great success in predictive ac...

08/19/2020
Neural Networks and Quantum Field Theory
We propose a theoretical understanding of neural networks in terms of Wi...
