Explaining Bayesian Neural Networks

08/23/2021
by Kirill Bykov, et al.

To make advanced learning machines such as Deep Neural Networks (DNNs) more transparent in decision making, explainable AI (XAI) aims to provide interpretations of DNNs' predictions. These interpretations are usually given in the form of heatmaps, each one illustrating relevant patterns regarding the prediction for a given instance. Bayesian approaches such as Bayesian Neural Networks (BNNs) so far have a limited form of transparency (model transparency) already built-in through their prior weight distribution, but notably, they lack explanations of their predictions for given instances. In this work, we bring together these two perspectives of transparency into a holistic explanation framework for explaining BNNs. Within the Bayesian framework, the network weights follow a probability distribution. Hence, the standard (deterministic) prediction strategy of DNNs extends in BNNs to a predictive distribution, and thus the standard explanation extends to an explanation distribution. Exploiting this view, we uncover that BNNs implicitly employ multiple heterogeneous prediction strategies. While some of these are inherited from standard DNNs, others are revealed to us by considering the inherent uncertainty in BNNs. Our quantitative and qualitative experiments on toy/benchmark data and real-world data from pathology show that the proposed approach of explaining BNNs can lead to more effective and insightful explanations.
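The core idea — that a distribution over weights induces a distribution over explanations — can be sketched in a few lines. The snippet below is an illustrative toy, not the paper's method: it assumes a linear model with a factorized Gaussian weight posterior (real BNN posteriors come from variational inference or MCMC) and uses gradient-times-input as the per-sample attribution. Each posterior sample yields one heatmap; aggregating the samples gives both an average explanation and an uncertainty over the explanation itself.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "BNN": a linear model y = w @ x with an assumed Gaussian
# posterior over the weights w (hypothetical values for illustration).
posterior_mean = np.array([1.0, -2.0, 0.5])
posterior_std = np.array([0.1, 0.5, 0.05])

def sample_weights(n):
    # Draw n weight vectors from the (factorized Gaussian) posterior.
    return rng.normal(posterior_mean, posterior_std, size=(n, 3))

def explanation(w, x):
    # For a linear model the input gradient of the prediction is w,
    # so gradient-times-input attribution is simply w * x per sample.
    return w * x

x = np.array([2.0, 1.0, -1.0])      # a single input instance
W = sample_weights(1000)            # posterior weight samples
E = explanation(W, x)               # one explanation (heatmap) per sample

mean_expl = E.mean(axis=0)          # average heatmap (standard-DNN view)
std_expl = E.std(axis=0)            # uncertainty of the explanation itself
```

Because the model is linear, the explanation distribution here is itself Gaussian with mean `posterior_mean * x`; for a deep network each sample would instead require one backward pass, but the aggregation step is the same.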


Related research

- 06/16/2020: How Much Can I Trust You? – Quantifying Uncertainties in Explaining Neural Networks
- 01/10/2022: Effective Representation to Capture Collaboration Behaviors between Explainer and User
- 03/20/2018: Explanation Methods in Deep Learning: Users, Values, Concerns and Challenges
- 04/30/2022: Explainable Artificial Intelligence for Bayesian Neural Networks: Towards trustworthy predictions of ocean dynamics
- 09/13/2020: Deep Transparent Prediction through Latent Representation Analysis
- 07/13/2021: Scalable, Axiomatic Explanations of Deep Alzheimer's Diagnosis from Heterogeneous Data
- 04/13/2017: Explaining the Unexplained: A CLass-Enhanced Attentive Response (CLEAR) Approach to Understanding Deep Neural Networks
