Concept Activation Regions: A Generalized Framework For Concept-Based Explanations

09/22/2022
by   Jonathan Crabbé, et al.

Concept-based explanations allow the predictions of a deep neural network (DNN) to be understood through the lens of concepts specified by users. Existing methods assume that the examples illustrating a concept are mapped along a fixed direction of the DNN's latent space. When this holds true, the concept can be represented by a concept activation vector (CAV) pointing in that direction. In this work, we propose to relax this assumption by allowing concept examples to be scattered across different clusters in the DNN's latent space. Each concept is then represented by a region of the DNN's latent space that includes these clusters and that we call a concept activation region (CAR). To formalize this idea, we introduce an extension of the CAV formalism based on the kernel trick and support vector classifiers. This CAR formalism yields both global concept-based explanations and local concept-based feature importance. We prove that CAR explanations built with radial kernels are invariant under latent space isometries; in this way, CAR assigns the same explanations to latent spaces that have the same geometry. We further demonstrate empirically that CARs offer (1) more accurate descriptions of how concepts are scattered in the DNN's latent space; (2) global explanations that are closer to human concept annotations; and (3) concept-based feature importance scores that meaningfully relate concepts to each other. Finally, we use CARs to show that DNNs can autonomously rediscover known scientific concepts, such as the prostate cancer grading system.
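The core construction can be illustrated with a minimal sketch: fit a kernel support vector classifier on a DNN's latent representations of concept-positive and concept-negative examples, and treat the region it classifies as positive as the concept activation region. This is an assumption-laden illustration, not the authors' implementation; the helper names (fit_car, concept_present) and the random stand-in activations are hypothetical, and scikit-learn's SVC with an RBF (radial) kernel stands in for the paper's kernel-based classifier.

```python
# Minimal sketch of the CAR idea: a concept is represented by the region of the
# latent space that a kernel SVC classifies as concept-positive.
import numpy as np
from sklearn.svm import SVC

def fit_car(latent_pos: np.ndarray, latent_neg: np.ndarray) -> SVC:
    """Fit a concept activation region from latent codes of concept-positive
    and concept-negative examples, using a radial (RBF) kernel."""
    X = np.vstack([latent_pos, latent_neg])
    y = np.concatenate([np.ones(len(latent_pos)), np.zeros(len(latent_neg))])
    car = SVC(kernel="rbf")
    car.fit(X, y)
    return car

def concept_present(car: SVC, latent: np.ndarray) -> np.ndarray:
    """Global-style explanation: does each latent code fall inside the CAR?"""
    return car.predict(latent).astype(bool)

# Usage with random stand-in activations; in practice these would be hidden-layer
# representations of the DNN evaluated on user-provided concept examples.
rng = np.random.default_rng(0)
pos = rng.normal(loc=2.0, size=(50, 16))   # concept-positive cluster
neg = rng.normal(loc=-2.0, size=(50, 16))  # concept-negative examples
car = fit_car(pos, neg)
print(concept_present(car, rng.normal(size=(5, 16))))
```

Because the decision boundary of an RBF-kernel classifier need not be a single hyperplane, this region can cover several disjoint clusters, which is exactly the relaxation of the CAV assumption described above.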
