PACE: Posthoc Architecture-Agnostic Concept Extractor for Explaining CNNs

08/31/2021
by Vidhya Kamakshi, et al.

Deep CNNs, though they achieve state-of-the-art performance on image classification tasks, remain black boxes to the humans using them. There is growing interest in explaining the workings of these deep models to improve their trustworthiness. In this paper, we introduce a Posthoc Architecture-agnostic Concept Extractor (PACE) that automatically extracts smaller sub-regions of the image, called concepts, that are relevant to the black-box prediction. PACE tightly integrates the faithfulness of the explanatory framework to the black-box model. To the best of our knowledge, this is the first work that automatically extracts class-specific discriminative concepts in a posthoc manner. The PACE framework is used to generate explanations for two different CNN architectures trained to classify the AWA2 and Imagenet-Birds datasets. Extensive human-subject experiments are conducted to validate the human interpretability and consistency of the explanations extracted by PACE. The results from these experiments suggest that over 72% of the concepts extracted by PACE are human interpretable.
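The abstract describes PACE only at a high level. As a loose illustration of the general idea of posthoc concept extraction (not PACE's actual algorithm, which the abstract does not specify), the sketch below crops candidate sub-regions from an image, embeds them with a stand-in for the frozen black-box model's features, and groups them into concept clusters with a simple k-means. All function names, the feature stand-in, and the clustering choice are assumptions made for illustration.

```python
import numpy as np

def extract_patches(image, patch_size=8, stride=8):
    # Slide a window over the image and collect sub-regions
    # (candidate concepts) as an array of patches.
    h, w = image.shape[:2]
    patches = []
    for y in range(0, h - patch_size + 1, stride):
        for x in range(0, w - patch_size + 1, stride):
            patches.append(image[y:y + patch_size, x:x + patch_size])
    return np.stack(patches)

def black_box_features(patches):
    # Hypothetical stand-in for the frozen CNN's feature extractor:
    # flatten each patch and L2-normalize it.
    flat = patches.reshape(len(patches), -1).astype(float)
    return flat / (np.linalg.norm(flat, axis=1, keepdims=True) + 1e-8)

def cluster_concepts(features, k=3, iters=20, seed=0):
    # Plain k-means: each centroid acts as one visual "concept";
    # labels assign every patch to its nearest concept.
    rng = np.random.default_rng(seed)
    centroids = features[rng.choice(len(features), size=k, replace=False)]
    for _ in range(iters):
        dists = np.linalg.norm(features[:, None] - centroids[None], axis=2)
        labels = dists.argmin(axis=1)
        for j in range(k):
            if (labels == j).any():
                centroids[j] = features[labels == j].mean(axis=0)
    return centroids, labels

# Toy 32x32 "image" just to exercise the pipeline end to end.
image = np.arange(32 * 32).reshape(32, 32) % 17
patches = extract_patches(image)          # 16 patches of 8x8
feats = black_box_features(patches)       # one 64-dim vector per patch
centroids, labels = cluster_concepts(feats)
```

In a real posthoc setting the flatten-and-normalize stand-in would be replaced by activations from the trained classifier itself, so that the extracted concepts reflect what the black box actually responds to rather than raw pixel statistics.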



research

11/03/2020
MACE: Model Agnostic Concept Extractor for Explaining Image Classification Networks
Deep convolutional networks have been quite successful at various image ...

05/07/2022
ConceptDistil: Model-Agnostic Distillation of Concept Explanations
Concept-based explanations aim to fill the model interpretability gap f...

10/07/2022
TCNL: Transparent and Controllable Network Learning Via Embedding Human-Guided Concepts
Explaining deep learning models is of vital importance for understanding...

09/11/2018
Visualizing Convolutional Neural Networks to Improve Decision Support for Skin Lesion Classification
Because of their state-of-the-art performance in computer vision, CNNs a...

04/10/2023
Coherent Concept-based Explanations in Medical Image and Its Application to Skin Lesion Diagnosis
Early detection of melanoma is crucial for preventing severe complicatio...

11/21/2022
Language in a Bottle: Language Model Guided Concept Bottlenecks for Interpretable Image Classification
Concept Bottleneck Models (CBM) are inherently interpretable models that...

06/25/2019
Interpretable Image Recognition with Hierarchical Prototypes
Vision models are interpretable when they classify objects on the basis ...
