Coalitional Bayesian Autoencoders – Towards explainable unsupervised deep learning

10/19/2021
by   Bang Xiang Yong, et al.

This paper aims to improve the explainability of Autoencoder (AE) predictions by proposing two explanation methods based on the mean and epistemic uncertainty of log-likelihood estimates, which arise naturally from the probabilistic formulation of the AE known as the Bayesian Autoencoder (BAE). To evaluate the explanation methods quantitatively, we test them in sensor network applications and propose three metrics based on covariate shift of sensors: (1) the G-mean of Spearman drift coefficients, (2) the G-mean of sensitivity-specificity of explanation ranking, and (3) the Sensor Explanation Quality Index (SEQI), which combines the two aforementioned metrics. Surprisingly, we find that explanations of the BAE's predictions suffer from high correlation, resulting in misleading explanations. To alleviate this, we propose a "Coalitional BAE", inspired by agent-based system theory. Our comprehensive experiments on publicly available condition-monitoring datasets demonstrate the improved quality of explanations using the Coalitional BAE.
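The abstract names three evaluation metrics but does not spell out their computation. The sketch below is a hedged illustration of how such metrics could be assembled: the per-sensor Spearman-coefficient aggregation, the top-k ranking rule for sensitivity-specificity, and the geometric-mean combination used for SEQI are all assumptions for illustration, not the paper's exact definitions.

```python
import numpy as np
from scipy.stats import spearmanr


def gmean(values):
    """Geometric mean of a list of non-negative scores."""
    values = np.asarray(values, dtype=float)
    return float(np.prod(values) ** (1.0 / len(values)))


def spearman_drift_gmean(explanations, drift_magnitudes):
    """G-mean of per-sensor Spearman coefficients between explanation
    scores and known covariate-shift levels (absolute value is an
    assumption, to keep each coefficient in [0, 1])."""
    coeffs = []
    for s in range(explanations.shape[1]):
        rho, _ = spearmanr(explanations[:, s], drift_magnitudes)
        coeffs.append(abs(rho))
    return gmean(coeffs)


def sens_spec_gmean(ranked_sensors, drifted_sensors, all_sensors):
    """G-mean of sensitivity and specificity of an explanation ranking,
    treating the top-k ranked sensors (k = number of truly drifted
    sensors) as the 'flagged' set -- a hypothetical thresholding rule."""
    top = set(ranked_sensors[: len(drifted_sensors)])
    truth = set(drifted_sensors)
    rest = set(all_sensors) - truth
    sensitivity = len(top & truth) / len(truth)
    specificity = len((set(all_sensors) - top) & rest) / len(rest)
    return gmean([sensitivity, specificity])


def seqi(drift_gmean, ss_gmean):
    """Assumption: SEQI combines the two G-means by a further
    geometric mean."""
    return gmean([drift_gmean, ss_gmean])
```

A higher SEQI would then indicate explanations that both track the magnitude of sensor drift and correctly rank the drifted sensors above the healthy ones.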


