Representativity and Consistency Measures for Deep Neural Network Explanations

09/07/2020
by Thomas Fel, et al.

The adoption of machine learning in critical contexts requires reliable explanations of why an algorithm makes certain predictions. Many methods have been proposed to explain the predictions of these black-box models, but despite this abundance of methods, little effort has been made to ensure that the explanations produced are objectively relevant. While it is possible to establish a number of desirable properties of a good explanation, it is more difficult to evaluate them. As a result, no measures are currently associated with the properties of consistency and generalization of explanations. We introduce a new procedure to compute two new measures, Relative Consistency (ReCo) and Mean Generalization (MeGe), for the consistency and generalization of explanations, respectively. Our results on several image classification datasets, using progressively degraded models, allow us to validate empirically the reliability of these measures. We compare the results obtained with those of existing measures. Finally, we demonstrate the potential of the measures by applying them to different families of models, revealing an interesting link between gradient-based explanation methods and 1-Lipschitz networks.
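The exact definitions of ReCo and MeGe are given in the paper; as a minimal sketch of the underlying idea, one can compare the explanations (e.g. saliency maps) that several models, trained on different data splits, produce for the same input. The helper names below (`mean_pairwise_agreement`, the toy saliency maps) are hypothetical illustrations, not the paper's implementation.

```python
# Illustrative sketch only: agreement between explanations from models
# trained on different splits, as a proxy for a generalization-style score.
# All names here are hypothetical, not the paper's actual definitions.
import numpy as np

def cosine_similarity(a, b):
    """Cosine similarity between two flattened explanation maps."""
    a, b = a.ravel(), b.ravel()
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

def mean_pairwise_agreement(explanations):
    """Average cosine similarity over all pairs of explanations for one
    input: high when models trained on different splits attribute
    importance to the same features."""
    n = len(explanations)
    sims = [cosine_similarity(explanations[i], explanations[j])
            for i in range(n) for j in range(i + 1, n)]
    return float(np.mean(sims))

# Toy usage: three noisy 'saliency maps' for the same input image.
rng = np.random.default_rng(0)
base = rng.random((4, 4))
expls = [base + 0.05 * rng.random((4, 4)) for _ in range(3)]
score = mean_pairwise_agreement(expls)
print(round(score, 3))  # close to 1.0: the explanations largely agree
```

A degraded model would be expected to lower such an agreement score, which is the intuition the paper validates with its progressively degraded models.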


Related research:
- Framework for Evaluating Faithfulness of Local Explanations (02/01/2022)
- What will it take to generate fairness-preserving explanations? (06/24/2021)
- The Intriguing Properties of Model Explanations (01/30/2018)
- Consistent Explanations in the Face of Model Indeterminacy via Ensembling (06/09/2023)
- Decoupling entrainment from consistency using deep neural networks (11/03/2020)
- Global Explanation of Tree-Ensembles Models Based on Item Response Theory (10/18/2022)
- Do not explain without context: addressing the blind spot of model explanations (05/28/2021)
