Do Explanations Reflect Decisions? A Machine-centric Strategy to Quantify the Performance of Explainability Algorithms

10/16/2019
by Zhong Qiu Lin, et al.

There has been a significant surge of interest recently around the concept of explainable artificial intelligence (XAI), where the goal is to produce an interpretation for a decision made by a machine learning algorithm. Of particular interest is the interpretation of how deep neural networks make decisions, given the complexity and 'black box' nature of such networks. Given the infancy of the field, there has been very limited exploration into the assessment of the performance of explainability methods, with most evaluations centered around subjective visual interpretation of the produced explanations. In this study, we explore a more machine-centric strategy for quantifying the performance of explainability methods on deep neural networks via the notion of decision-making impact analysis. We introduce two quantitative performance metrics: i) Impact Score, which assesses the percentage of critical factors with either strong confidence-reduction impact or decision-changing impact, and ii) Impact Coverage, which assesses the percentage coverage of adversarially impacted factors in the input. A comprehensive analysis using this approach was conducted on several state-of-the-art explainability methods (LIME, SHAP, Expected Gradients, GSInquire) on a ResNet-50 deep convolutional neural network using a subset of ImageNet for the task of image classification. Experimental results show that the critical regions identified by LIME within the tested images had the lowest impact on the decision-making process of the network (~38%), with progressive increase in decision-making impact for SHAP (~44%) and GSInquire (~76%). The results demonstrate that such a machine-centric strategy helps push the conversation forward towards better metrics for evaluating explainability methods and improve trust in deep neural networks.
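The paper formalizes these two metrics; as a rough sketch of the underlying idea (not the authors' reference implementation), the NumPy code below assumes a classifier `model` that maps a batch of images to class probabilities, binary masks marking each method's identified critical regions, and, for Impact Coverage, binary masks of the adversarially perturbed regions. The function names, mask shapes, masking transform, and the 50% confidence-drop threshold are all illustrative assumptions.

```python
import numpy as np

def impact_score(model, images, critical_masks, conf_drop=0.5):
    """Estimate Impact Score: the fraction of test inputs for which
    masking out the explanation-identified critical region either
    changes the network's decision or strongly reduces its confidence.

    Assumed shapes: images (N, C, H, W) floats, critical_masks (N, H, W)
    binary; `model` returns (N, num_classes) softmax probabilities.
    The confidence-drop threshold is an assumption, not the paper's
    exact constant.
    """
    probs = model(images)
    preds = probs.argmax(axis=1)
    idx = np.arange(len(images))
    conf = probs[idx, preds]

    # Zero out the critical pixels (one simple "removal" transform).
    masked = images * (1.0 - critical_masks[:, None, :, :])
    probs_m = model(masked)
    decision_changed = probs_m.argmax(axis=1) != preds
    conf_reduced = probs_m[idx, preds] <= (1.0 - conf_drop) * conf

    return float(np.mean(decision_changed | conf_reduced))


def impact_coverage(critical_masks, adv_masks):
    """Estimate Impact Coverage: the mean fraction of the adversarially
    perturbed region (e.g., an adversarial patch) that the identified
    critical region actually covers. The paper's exact normalization
    may differ; this sketch uses intersection over the patch area.
    """
    inter = np.logical_and(critical_masks, adv_masks).sum(axis=(1, 2))
    patch = adv_masks.sum(axis=(1, 2))
    return float(np.mean(inter / np.maximum(patch, 1)))
```

Under this scheme, a higher Impact Score indicates that the regions an explanation highlights genuinely drive the network's decision, while a higher Impact Coverage indicates that the explanation locates the adversarial perturbation known to drive it.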
