Fixing Asymptotic Uncertainty of Bayesian Neural Networks with Infinite ReLU Features

10/06/2020
by Agustinus Kristiadi et al.

Approximate Bayesian methods can mitigate overconfidence in ReLU networks. However, far away from the training data, even Bayesian neural networks (BNNs) can still underestimate uncertainty and thus be overconfident. We suggest fixing this by considering an infinite number of ReLU features over the input domain that are never part of the training process and thus remain at their prior values. Perhaps surprisingly, we show that this model leads to a tractable Gaussian process (GP) term that can be added to a pre-trained BNN's posterior at test time with negligible cost overhead. The BNN then yields structured uncertainty in the proximity of the training data, while the GP prior calibrates uncertainty far away from it. As a key contribution, we prove that the added uncertainty yields cubic predictive variance growth, and thus the ideal uniform (maximum entropy) confidence in multi-class classification far from the training data.
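As a rough illustration of where the cubic growth comes from (a one-dimensional simplification, not the authors' code): a ReLU feature max(0, x - b) whose bias b is left at a uniform prior over [0, L] contributes the kernel k(x, x') = sigma^2 * integral_0^L max(0, x - b) max(0, x' - b) db, which has the closed form of a cubic-spline kernel, so the prior variance k(x, x) = sigma^2 * x^3 / 3 grows cubically in x. The sketch below checks that closed form against a Monte Carlo feature approximation; the function names and the choice L = 10 are illustrative assumptions.

```python
import numpy as np

# Illustrative sketch only (not the paper's implementation): a 1-D check that
# infinitely many ReLU features whose biases stay at a uniform prior over
# [0, L] induce a cubic-spline GP kernel, hence cubic variance growth.

def relu_feature_kernel_mc(x, xp, n_features=200_000, L=10.0, sigma2=1.0, seed=0):
    """Monte Carlo estimate of k(x, x') = sigma2 * integral_0^L relu(x-b) relu(x'-b) db."""
    rng = np.random.default_rng(seed)
    b = rng.uniform(0.0, L, size=n_features)       # biases kept at their prior
    phi_x = np.maximum(0.0, x - b)                 # ReLU features, never trained
    phi_xp = np.maximum(0.0, xp - b)
    return sigma2 * L * np.mean(phi_x * phi_xp)    # L * E_b[...] approximates the integral

def relu_feature_kernel_exact(x, xp, sigma2=1.0):
    """Closed form for 0 <= x, x' <= L: sigma2 * (max * min^2 / 2 - min^3 / 6)."""
    lo, hi = min(x, xp), max(x, xp)
    return sigma2 * (hi * lo**2 / 2.0 - lo**3 / 6.0)

# Prior variance k(x, x) = sigma2 * x^3 / 3: cubic growth away from the origin.
for x in (1.0, 2.0, 4.0, 8.0):
    print(f"x={x:4.1f}  MC={relu_feature_kernel_mc(x, x):9.3f}  "
          f"exact={relu_feature_kernel_exact(x, x):9.3f}")
```

In this toy picture, adding the GP term to a pre-trained BNN's posterior at test time amounts to adding the prior variance k(x, x) of the untrained features to the BNN's predictive variance, so the combined uncertainty keeps growing away from the data even where the BNN alone would saturate.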


Related research

02/24/2020
Being Bayesian, Even Just a Bit, Fixes Overconfidence in ReLU Networks
The point estimates of ReLU classification networks—arguably the most wi...

09/26/2019
Towards neural networks that provably know when they don't know
It has recently been shown that ReLU networks produce arbitrarily over-c...

04/30/2022
Deep Ensemble as a Gaussian Process Approximate Posterior
Deep Ensemble (DE) is an effective alternative to Bayesian neural networ...

11/17/2021
Do Not Trust Prediction Scores for Membership Inference Attacks
Membership inference attacks (MIAs) aim to determine whether a specific ...

11/05/2021
Mixtures of Laplace Approximations for Improved Post-Hoc Uncertainty in Deep Learning
Deep neural networks are prone to overconfident predictions on outliers....

08/19/2020
Neural Networks and Quantum Field Theory
We propose a theoretical understanding of neural networks in terms of Wi...