Scrutinizing XAI using linear ground-truth data with suppressor variables

11/14/2021
by Rick Wilming, et al.

Machine learning (ML) is increasingly used to inform high-stakes decisions. As complex ML models (e.g., deep neural networks) are often considered black boxes, a wealth of procedures has been developed to shed light on their inner workings and on how their predictions come about, defining the field of 'explainable AI' (XAI). Among these, saliency methods rank input features according to some measure of 'importance'. Such methods are difficult to validate, since a formal definition of feature importance is, thus far, lacking. It has been demonstrated that some saliency methods can highlight features that have no statistical association with the prediction target (suppressor variables). To avoid misinterpretations due to such behavior, we propose the actual presence of such an association as a necessary condition and objective preliminary definition of feature importance. We carefully crafted a ground-truth dataset in which all statistical dependencies are well-defined and linear, serving as a benchmark to study the problem of suppressor variables. We evaluate common explanation methods, including LRP, DTD, PatternNet, PatternAttribution, LIME, Anchors, SHAP, and permutation-based methods, with respect to this definition. We show that most of these methods are unable to distinguish important features from suppressors in this setting.
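The following minimal sketch (not the authors' benchmark code) illustrates how a suppressor variable can arise in a linear setting; the two-feature construction, the variable names, and the use of scikit-learn's LogisticRegression are assumptions made for this example.

```python
# Hypothetical two-feature example of a suppressor variable (illustrative only).
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000

y = rng.integers(0, 2, size=n)           # binary class label
z = np.where(y == 1, 1.0, -1.0)          # class-dependent signal
d = rng.normal(size=n)                   # distractor noise shared by both features

x1 = z + d                               # informative feature: signal plus distractor
x2 = d                                   # suppressor: distractor only, no association with y
X = np.column_stack([x1, x2])

# x2 has (essentially) zero statistical association with the target ...
print("corr(x2, y):", np.corrcoef(x2, y)[0, 1])

# ... yet an optimal linear classifier assigns it a large (negative) weight,
# because subtracting x2 cancels the distractor contained in x1.
clf = LogisticRegression().fit(X, y)
print("weights:", clf.coef_[0])
```

Any importance measure that simply reflects the model weights (or weight-times-input attributions) will therefore flag x2, even though, under the association-based definition proposed above, x2 is not an important feature.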



Related research

04/23/2020 · Evaluating Adversarial Robustness for Deep Neural Network Interpretability using fMRI Decoding
While deep neural networks (DNNs) are being increasingly used to make pr...

05/20/2021 · Evaluating the Correctness of Explainable AI Algorithms for Classification
Explainable AI has attracted much research attention in recent years wit...

11/22/2016 · Feature Importance Measure for Non-linear Learning Algorithms
Complex problems may require sophisticated, non-linear learning methods ...

09/22/2020 · What Do You See? Evaluation of Explainable Artificial Intelligence (XAI) Interpretability through Neural Backdoors
EXplainable AI (XAI) methods have been proposed to interpret how a deep ...

09/03/2021 · Relating the Partial Dependence Plot and Permutation Feature Importance to the Data Generating Process
Scientists and practitioners increasingly rely on machine learning to mo...

07/20/2021 · Shared Interest: Large-Scale Visual Analysis of Model Behavior by Measuring Human-AI Alignment
Saliency methods – techniques to identify the importance of input featur...

08/08/2017 · Learning Visual Importance for Graphic Designs and Data Visualizations
Knowing where people look and click on visual designs can provide clues ...