Uncovering Unique Concept Vectors through Latent Space Decomposition

07/13/2023
by Mara Graziani, et al.

Interpreting the inner workings of deep learning models is crucial for establishing trust and ensuring model safety. Concept-based explanations have emerged as a more interpretable alternative to feature attribution estimates such as pixel saliency. However, manually defining the concepts for the interpretability analysis biases the explanations toward the user's expectations of those concepts. To address this, we propose a novel post-hoc unsupervised method that automatically uncovers the concepts learned by deep models during training. By decomposing the latent space of a layer into singular vectors and refining them with unsupervised clustering, we uncover concept vectors that are aligned with directions of high variance, relevant to the model prediction, and that point to semantically distinct concepts. Our extensive experiments reveal that the majority of our concepts are readily understandable to humans, exhibit coherency, and are relevant to the task at hand. Moreover, we showcase the practical utility of our method in dataset exploration, where our concept vectors successfully identify outlier training samples affected by various confounding factors. This novel exploration technique is remarkably versatile across data types and model architectures, and it will facilitate the identification of biases and the discovery of sources of error within training data.
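At its core, the described pipeline amounts to a singular value decomposition of a layer's activation matrix followed by unsupervised clustering of the reduced representation. The sketch below is a minimal illustration of that idea only, not the authors' released code: it assumes a PyTorch model with a hook on a chosen layer and uses scikit-learn k-means as the clustering step, and the names collect_activations, concept_vectors, n_components, and n_concepts are hypothetical.

```python
# Minimal sketch: SVD of a layer's latent space, then clustering of the
# projected samples to obtain candidate concept directions. Illustrative only.
import numpy as np
import torch
from sklearn.cluster import KMeans

def collect_activations(model, layer, loader, device="cpu"):
    """Gather flattened activations of `layer` for every (input, label) batch."""
    feats = []
    handle = layer.register_forward_hook(
        lambda _m, _inp, out: feats.append(out.detach().flatten(1).cpu())
    )
    model.eval()
    with torch.no_grad():
        for x, _ in loader:
            model(x.to(device))
    handle.remove()
    return torch.cat(feats).numpy()            # shape: (n_samples, n_features)

def concept_vectors(acts, n_components=50, n_concepts=10):
    """SVD of the centred activation matrix, then k-means on the projections;
    each cluster centroid, mapped back to feature space, is one candidate concept."""
    centred = acts - acts.mean(axis=0, keepdims=True)
    # Rows of vt are orthonormal directions ordered by decreasing variance.
    _, _, vt = np.linalg.svd(centred, full_matrices=False)
    basis = vt[:n_components]                   # (n_components, n_features)
    proj = centred @ basis.T                    # samples in the reduced space
    km = KMeans(n_clusters=n_concepts, n_init=10).fit(proj)
    centroids = km.cluster_centers_ @ basis     # back to latent space
    units = centroids / np.linalg.norm(centroids, axis=1, keepdims=True)
    return units, km.labels_
```

Each returned unit vector can then be screened for relevance to the prediction, for instance by measuring how the model output changes along that direction, and inspected for semantic coherence via the training samples assigned to its cluster.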


research · 02/25/2022
Human-Centered Concept Explanations for Neural Networks
Understanding complex machine learning models such as deep neural networ...

research · 07/24/2023
Concept-based explainability for an EEG transformer model
Deep learning models are complex due to their size, structure, and inher...

research · 08/18/2023
From Hope to Safety: Unlearning Biases of Deep Models by Enforcing the Right Reasons in Latent Space
Deep Neural Networks are prone to learning spurious correlations embedde...

research · 11/21/2022
Revealing Hidden Context Bias in Segmentation and Object Detection through Concept-specific Explanations
Applying traditional post-hoc attribution methods to segmentation or obj...

research · 04/29/2022
Concept Activation Vectors for Generating User-Defined 3D Shapes
We explore the interpretability of 3D geometric deep learning models in ...

research · 03/19/2023
Unsupervised Interpretable Basis Extraction for Concept-Based Visual Explanations
An important line of research attempts to explain CNN image classifier p...

research · 04/06/2021
Robust Semantic Interpretability: Revisiting Concept Activation Vectors
Interpretability methods for image classification assess model trustwort...
