Sampling-based Bayesian Inference with gradient uncertainty

12/08/2018
by Chanwoo Park, et al.

Deep neural networks (NNs) have achieved impressive performance, often exceeding human performance on many computer vision tasks. However, one of the most challenging issues that remains is that NNs are overconfident in their predictions, which can be very harmful when it arises in safety-critical applications. In this paper, we show that predictive uncertainty can be efficiently estimated when we incorporate the concept of gradient uncertainty into posterior sampling. The proposed method is tested on two different datasets: MNIST for in-distribution confusing examples and notMNIST for out-of-distribution data. We show that our method is able to efficiently represent predictive uncertainty on both datasets.
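The abstract describes estimating predictive uncertainty from posterior samples. As a hedged illustration of the general sampling-based approach (not the authors' specific gradient-uncertainty method), the sketch below uses Stochastic Gradient Langevin Dynamics (SGLD) to draw approximate posterior samples for a toy Bayesian logistic-regression model, then measures predictive uncertainty as the entropy of the sample-averaged class probability. All data and hyperparameters here are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy linearly separable binary data (hypothetical stand-in for MNIST).
X = rng.normal(size=(200, 2))
y = (X[:, 0] + X[:, 1] > 0).astype(float)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def log_posterior_grad(w, X, y, prior_var=10.0):
    # Gradient of the Bernoulli log-likelihood plus a Gaussian-prior term.
    p = sigmoid(X @ w)
    return X.T @ (y - p) - w / prior_var

def sgld_samples(X, y, n_samples=300, step=1e-3, burn_in=200):
    # SGLD: a gradient ascent step on the log-posterior plus injected
    # Gaussian noise scaled by sqrt(step) yields approximate posterior samples.
    w = np.zeros(X.shape[1])
    samples = []
    for t in range(burn_in + n_samples):
        grad = log_posterior_grad(w, X, y)
        noise = rng.normal(size=w.shape) * np.sqrt(step)
        w = w + 0.5 * step * grad + noise
        if t >= burn_in:
            samples.append(w.copy())
    return np.array(samples)

def predictive_entropy(samples, x):
    # Average the class probability over posterior samples, then take
    # the entropy of the averaged predictive distribution.
    p = sigmoid(samples @ x).mean()
    eps = 1e-12
    return -(p * np.log(p + eps) + (1 - p) * np.log(1 - p + eps))

samples = sgld_samples(X, y)
h_confident = predictive_entropy(samples, np.array([3.0, 3.0]))  # deep inside a class
h_ambiguous = predictive_entropy(samples, np.array([0.0, 0.0]))  # on the boundary
print(h_confident < h_ambiguous)  # the boundary point should be more uncertain
```

The design choice mirrors the paper's premise: a single deterministic network gives one overconfident probability, while averaging predictions across posterior samples lets disagreement between samples surface as high predictive entropy on ambiguous or out-of-distribution inputs.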

Related research

04/10/2020 · Model Uncertainty Quantification for Reliable Deep Vision Structural Health Monitoring
Computer vision leveraging deep learning has achieved significant succes...

11/29/2018 · Uncertainty propagation in neural networks for sparse coding
A novel method to propagate uncertainty through the soft-thresholding no...

09/16/2021 · Improving Regression Uncertainty Estimation Under Statistical Change
While deep neural networks are highly performant and successful in a wid...

03/04/2023 · Calibrating Transformers via Sparse Gaussian Processes
Transformer models have achieved profound success in prediction tasks in...

10/07/2019 · Deep Evidential Regression
Deterministic neural networks (NNs) are increasingly being deployed in s...

09/05/2023 · Sparse Function-space Representation of Neural Networks
Deep neural networks (NNs) are known to lack uncertainty estimates and s...

08/03/2023 · Quantification of Predictive Uncertainty via Inference-Time Sampling
Predictive variability due to data ambiguities has typically been addres...
