REVEL Framework to measure Local Linear Explanations for black-box models: Deep Learning Image Classification case of study

11/11/2022
by Iván Sevillano-García, et al.

Explainable artificial intelligence (XAI) aims to provide explanations for the reasoning performed by artificial intelligence systems. There is no consensus on how to evaluate the quality of these explanations, since even the definition of an explanation itself is not settled in the literature. In particular, for the widely used Local Linear Explanations, there are qualitative proposals for evaluating explanations, but they suffer from theoretical inconsistencies. The case of images is even more problematic: a visual explanation may appear to explain a decision when all it really does is detect edges. The literature offers a large number of metrics specialized in quantitatively measuring different qualitative aspects, so we should be able to develop metrics that measure the desirable aspects of explanations in a robust and correct way. In this paper, we propose a procedure called REVEL to evaluate different aspects of the quality of explanations with a theoretically coherent development. This procedure advances the state of the art in several ways: it standardizes the concept of explanation and develops a series of metrics that not only allow explanations to be compared with one another but also yield absolute information about an explanation itself. The experiments were carried out on four image datasets as benchmarks, on which we show REVEL's descriptive and analytical power.
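For readers unfamiliar with the Local Linear Explanations (LLEs) the abstract refers to, the sketch below illustrates the basic idea: a black-box prediction is approximated near one input by a weighted linear surrogate, whose coefficients act as the explanation, and a simple local-fidelity score checks how well the surrogate tracks the model. This is a minimal, generic illustration, not REVEL's method or API; the names black_box, local_linear_explanation, and local_fidelity are assumptions introduced here, and REVEL's actual metric suite is defined in the full paper.

```python
import numpy as np
from sklearn.linear_model import Ridge

# Hypothetical stand-in for an opaque model f: R^2 -> [0, 1].
# REVEL targets image classifiers, but the LLE machinery is the same.
def black_box(X):
    return 1.0 / (1.0 + np.exp(-(2.0 * X[:, 0] - 0.5 * X[:, 1] ** 2)))

def local_linear_explanation(f, x, n_samples=500, sigma=0.3, seed=0):
    """Fit a weighted linear surrogate g(z) ~ f(z) around x (LIME-style).
    The surrogate's coefficients are the 'explanation' of f at x."""
    rng = np.random.default_rng(seed)
    # Perturb the instance of interest and query the black box.
    Z = x + rng.normal(scale=sigma, size=(n_samples, x.size))
    y = f(Z)
    # Proximity kernel: perturbations closer to x count more in the fit.
    d2 = np.sum((Z - x) ** 2, axis=1)
    w = np.exp(-d2 / sigma ** 2)
    return Ridge(alpha=1e-3).fit(Z, y, sample_weight=w)

def local_fidelity(f, g, x, n_samples=500, sigma=0.3, seed=1):
    """One *generic* quality score: how well g tracks f near x (R^2).
    Only an illustration of what a quantitative metric can measure."""
    rng = np.random.default_rng(seed)
    Z = x + rng.normal(scale=sigma, size=(n_samples, x.size))
    return g.score(Z, f(Z))

x0 = np.array([0.5, -1.0])
g = local_linear_explanation(black_box, x0)
print("attributions:", g.coef_)  # per-feature importance around x0
print("local fidelity (R^2):", local_fidelity(black_box, g, x0))
```

A score like the R^2 above only compares surrogates against the model; part of REVEL's contribution, per the abstract, is complementing such relative comparisons with metrics that give absolute information about an explanation on its own.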


Related research

To trust or not to trust an explanation: using LEAF to evaluate local linear XAI methods (06/01/2021)
The main objective of eXplainable Artificial Intelligence (XAI) is to pr...

Highlighting Bias with Explainable Neural-Symbolic Visual Reasoning (09/19/2019)
Many high-performance models suffer from a lack of interpretability. The...

Quantitative Metrics for Evaluating Explanations of Video DeepFake Detectors (10/07/2022)
The proliferation of DeepFake technology is a rising challenge in today'...

Towards Better Model Understanding with Path-Sufficient Explanations (09/13/2021)
Feature based local attribution methods are amongst the most prevalent i...

Evolved Explainable Classifications for Lymph Node Metastases (05/14/2020)
A novel evolutionary approach for Explainable Artificial Intelligence is...

Coalitional Bayesian Autoencoders – Towards explainable unsupervised deep learning (10/19/2021)
This paper aims to improve the explainability of Autoencoder's (AE) pred...

DeepCreativity: Measuring Creativity with Deep Learning Techniques (01/16/2022)
Measuring machine creativity is one of the most fascinating challenges i...
