Overlooked factors in concept-based explanations: Dataset choice, concept salience, and human capability

07/20/2022
by Vikram V. Ramaswamy, et al.

Concept-based interpretability methods aim to explain the predictions of deep neural network models using a predefined set of semantic concepts. These methods evaluate a trained model on a new, "probe" dataset and correlate the model's predictions with the visual concepts labeled in that dataset. Despite their popularity, they suffer from limitations that are not well understood or articulated in the literature. In this work, we analyze three commonly overlooked factors in concept-based explanations. First, the choice of the probe dataset has a profound impact on the generated explanations. Our analysis reveals that different probe datasets can lead to very different explanations, suggesting that the explanations do not generalize outside the probe dataset. Second, we find that concepts in the probe dataset are often less salient and harder to learn than the classes they are meant to explain, calling into question the correctness of the explanations. We argue that only visually salient concepts should be used in concept-based explanations. Finally, while existing methods use hundreds or even thousands of concepts, our human studies reveal a much stricter upper bound of 32 concepts or fewer, beyond which the explanations are far less practically useful. We make suggestions for future development and analysis of concept-based interpretability methods. Code for our analysis and user interface can be found at <https://github.com/princetonvisualai/OverlookedFactors>
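
To make the setup concrete, below is a minimal sketch of the kind of concept-based explanation being analyzed: a sparse linear model is fit on a probe dataset to predict the target model's output from binary concept annotations, and its largest-magnitude coefficients are read off as the explanation. The toy data, variable names, and hyperparameters are illustrative assumptions, not the paper's actual pipeline (see the repository linked above for that).

```python
# Minimal sketch of a concept-based explanation, assuming a probe dataset with
# binary concept annotations. All names and numbers here (probe_concepts,
# model_predictions, the toy data itself) are illustrative placeholders.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Toy probe dataset: n images annotated with k binary concepts (e.g. "grass",
# "sky"), plus the trained model's binary prediction for one target class.
n_images, n_concepts = 500, 8
probe_concepts = rng.integers(0, 2, size=(n_images, n_concepts))
hidden_weights = rng.normal(size=n_concepts)  # stand-in for the model's behavior
model_predictions = (probe_concepts @ hidden_weights
                     + rng.normal(scale=0.5, size=n_images)) > 0

# Correlate predictions with concepts: fit a sparse linear model that predicts
# the target model's output from concept presence; its coefficients serve as
# the concept-based explanation.
explainer = LogisticRegression(penalty="l1", solver="liblinear", C=0.5)
explainer.fit(probe_concepts, model_predictions)

# Rank concepts by coefficient magnitude; the top-weighted concepts form the
# explanation shown to users.
weights = explainer.coef_[0]
for idx in np.argsort(-np.abs(weights))[:5]:
    print(f"concept {idx}: weight {weights[idx]:+.3f}")
```

In this framing, the three factors studied above map onto which probe dataset supplies `probe_concepts`, whether each concept column is itself visually salient and learnable, and how many weighted concepts a person can realistically reason about.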

Related research

02/07/2019  Automating Interpretability: Discovering and Testing Visual Concepts Learned by Neural Networks
03/27/2023  UFO: A unified method for controlling Understandability and Faithfulness Objectives in concept-based explanations for CNNs
04/04/2022  ConceptExplainer: Understanding the Mental Model of Deep Learning Algorithms via Interactive Concept-based Explanations
11/03/2020  MACE: Model Agnostic Concept Extractor for Explaining Image Classification Networks
11/17/2021  Acquisition of Chess Knowledge in AlphaZero
09/22/2022  Concept Activation Regions: A Generalized Framework For Concept-Based Explanations
01/30/2023  Explaining Dataset Changes for Semantic Data Versioning with Explain-Da-V (Technical Report)
