Gradient-based Uncertainty Attribution for Explainable Bayesian Deep Learning

04/10/2023
by   Hanjing Wang, et al.

Predictions made by deep learning models are sensitive to data perturbations, adversarial attacks, and out-of-distribution inputs. To build a trusted AI system, it is therefore critical to accurately quantify the prediction uncertainties. While current efforts focus on improving the accuracy and efficiency of uncertainty quantification, there is also a need to identify the sources of uncertainty and take action to mitigate their effects on predictions. We therefore propose explainable and actionable Bayesian deep learning methods that not only perform accurate uncertainty quantification but also explain the uncertainties, identify their sources, and suggest strategies to mitigate their impact. Specifically, we introduce a gradient-based uncertainty attribution method, UA-Backprop, that identifies the regions of the input that contribute most to the prediction uncertainty. Compared to existing methods, UA-Backprop offers competitive accuracy, relaxed assumptions, and high efficiency. Moreover, we propose an uncertainty mitigation strategy that leverages the attribution results as attention to further improve model performance. Both qualitative and quantitative evaluations demonstrate the effectiveness of the proposed methods.
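The core idea of gradient-based uncertainty attribution can be illustrated with a minimal sketch: compute a scalar uncertainty measure (here, the predictive entropy of the averaged predictive distribution over posterior samples) and take its gradient with respect to the input; per-feature gradient magnitudes then serve as an attribution map. This is not the paper's UA-Backprop implementation — the toy linear-softmax ensemble below merely stands in for Monte Carlo samples from a Bayesian network, and all names are hypothetical.

```python
import numpy as np

def softmax(z):
    """Numerically stable softmax for a 1-D logit vector."""
    e = np.exp(z - z.max())
    return e / e.sum()

def predictive_entropy_and_grad(x, weight_samples):
    """Predictive entropy of the mean predictive distribution and its
    analytic gradient w.r.t. the input x.

    weight_samples: list of (C, D) weight matrices, a toy stand-in for
    posterior samples of a Bayesian model (hypothetical setup).
    """
    probs = [softmax(W @ x) for W in weight_samples]
    p_bar = np.mean(probs, axis=0)                      # mean prediction
    H = -np.sum(p_bar * np.log(p_bar + 1e-12))          # predictive entropy
    # Chain rule: dH/dx = sum_c dH/dp_bar_c * dp_bar_c/dx,
    # with dH/dp_bar_c = -(log p_bar_c + 1) and, for softmax(Wx),
    # dp/dx = (diag(p) - p p^T) W for each posterior sample.
    grad = np.zeros_like(x)
    for p, W in zip(probs, weight_samples):
        J = (np.diag(p) - np.outer(p, p)) @ W           # (C, D) Jacobian
        grad += J.T @ (-(np.log(p_bar + 1e-12) + 1.0)) / len(weight_samples)
    return H, grad

# Attribution map: per-feature magnitude of the uncertainty gradient.
rng = np.random.default_rng(0)
x = rng.normal(size=5)                                  # toy input
Ws = [rng.normal(size=(3, 5)) for _ in range(4)]        # 4 "posterior" samples
H, g = predictive_entropy_and_grad(x, Ws)
attribution = np.abs(g)
```

In an image setting the same recipe applies pixel-wise: the entropy is backpropagated through the network to the input, and high-magnitude pixels are flagged as the problematic regions driving the uncertainty.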


research
09/19/2023

Adversarial Attacks Against Uncertainty Quantification

Machine-learning models can be fooled by adversarial examples, i.e., car...
research
03/22/2021

Interpreting Deep Learning Models with Marginal Attribution by Conditioning on Quantiles

A vastly growing literature on explaining deep learning models has emerg...
research
02/09/2021

STUaNet: Understanding uncertainty in spatiotemporal collective human mobility

The high dynamics and heterogeneous interactions in the complicated urba...
research
12/14/2022

Post-hoc Uncertainty Learning using a Dirichlet Meta-Model

It is known that neural networks have the problem of being over-confiden...
research
04/13/2023

Neural State-Space Models: Empirical Evaluation of Uncertainty Quantification

Effective quantification of uncertainty is an essential and still missin...
research
02/23/2023

Uncertainty Injection: A Deep Learning Method for Robust Optimization

This paper proposes a paradigm of uncertainty injection for training dee...
research
08/06/2023

Building Safe and Reliable AI systems for Safety Critical Tasks with Vision-Language Processing

Although AI systems have been applied in various fields and achieved imp...
