Best of both worlds: local and global explanations with human-understandable concepts

06/16/2021
by Jessica Schrouff, et al.

Interpretability techniques aim to provide the rationale behind a model's decision, typically by explaining either an individual prediction (local explanation, e.g. "why is this patient diagnosed with this condition?") or a class of predictions (global explanation, e.g. "why are patients diagnosed with this condition in general?"). While many methods focus on one or the other, few frameworks can provide both local and global explanations in a consistent manner. In this work, we combine two powerful existing techniques, one local (Integrated Gradients, IG) and one global (Testing with Concept Activation Vectors, TCAV), to provide local and global concept-based explanations. We first validate our approach on two synthetic datasets with a known ground truth, and further demonstrate it on a benchmark natural image dataset. We test our method with various concepts, target classes, model architectures and IG baselines. We show that our method improves global explanations over TCAV when compared to ground truth, and provides useful insights. We hope our work provides a step towards building bridges between the many existing local and global methods to get the best of both worlds.
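To make the combination concrete, here is a minimal sketch (illustrative only, not the authors' code) of how IG computed in a layer's activation space can be projected onto a concept activation vector (CAV) to yield a signed local concept attribution, with a TCAV-style aggregate giving the global view. The names `train_cav`, `concept_attribution`, and `grad_fn` are hypothetical; `grad_fn` is assumed to return the gradient of the target class logit with respect to the layer activations.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def train_cav(concept_acts, random_acts):
    # Fit a linear classifier separating concept activations from random
    # activations; the CAV is the unit normal to its decision boundary.
    X = np.vstack([concept_acts, random_acts])
    y = np.concatenate([np.ones(len(concept_acts)), np.zeros(len(random_acts))])
    clf = LogisticRegression(max_iter=1000).fit(X, y)
    cav = clf.coef_.ravel()
    return cav / np.linalg.norm(cav)

def concept_attribution(act, baseline_act, grad_fn, cav, steps=50):
    # Integrated gradients along the straight-line path in activation
    # space, projected onto the CAV: one signed, local concept score.
    alphas = np.linspace(0.0, 1.0, steps)
    grads = np.stack([grad_fn(baseline_act + a * (act - baseline_act))
                      for a in alphas])          # d(logit)/d(activation)
    ig = (act - baseline_act) * grads.mean(axis=0)  # Riemann approximation
    return float(ig @ cav)

def global_score(local_scores):
    # TCAV-style aggregation: the fraction of examples in a class whose
    # local concept attribution is positive.
    s = np.asarray(local_scores)
    return float((s > 0).mean())
```

In this sketch the local scores explain individual predictions in terms of the concept, while their aggregation over a class recovers a global, TCAV-like explanation; the choice of IG baseline (here `baseline_act`) is one of the factors the paper reports varying.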

