Concept-based Explanations using Non-negative Concept Activation Vectors and Decision Tree for CNN Models

11/19/2022
by Gayda Mutahar, et al.

This paper evaluates whether training a decision tree on concepts extracted by a concept-based explainer can increase the interpretability of convolutional neural network (CNN) models and boost the fidelity and performance of the underlying explainer. CNNs for computer vision have shown exceptional performance in critical industries, but their complexity and lack of interpretability remain a significant barrier to deployment. Recent work on explaining computer vision models has shifted from extracting low-level features (pixel-based explanations) to mid- or high-level features (concept-based explanations). The current research direction uses the extracted features to build approximation models, such as linear models or decision trees, that interpret the original model. In this work, we modify one of the state-of-the-art concept-based explainers and propose an alternative framework named TreeICE. We design a systematic evaluation based on three requirements: fidelity (agreement of the approximate model with the original model's labels), performance (agreement of the approximate model with the ground-truth labels), and interpretability (how meaningful the approximate model is to humans). We conduct a computational evaluation (for fidelity and performance) and human-subject experiments (for interpretability). We find that TreeICE outperforms the baseline in interpretability and generates more human-readable explanations in the form of a semantic tree structure. This work highlights how important it is to have more understandable explanations when interpretability is crucial.
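The pipeline the abstract describes can be summarised in a few lines. The sketch below is illustrative rather than the authors' implementation: it assumes intermediate-layer CNN activations and labels are already available as NumPy arrays, uses scikit-learn's NMF to obtain non-negative concept activation vectors (NCAVs), pools the resulting concept scores per image, and fits a shallow decision tree as the surrogate. All names (`treeice_sketch`, `acts`, and so on) are hypothetical.

```python
import numpy as np
from sklearn.decomposition import NMF
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import accuracy_score

def treeice_sketch(acts, cnn_labels, true_labels, n_concepts=10, max_depth=4):
    """Illustrative TreeICE-style surrogate.

    acts:        feature maps from an intermediate CNN layer, shape
                 (n_images, h, w, c); assumed non-negative (post-ReLU),
                 as NMF requires.
    cnn_labels:  the CNN's predicted classes for each image.
    true_labels: the ground-truth classes for each image.
    """
    n, h, w, c = acts.shape

    # 1) Non-negative concept activation vectors: factorise the spatially
    #    flattened activations A ~= S @ P with NMF. Rows of P are the NCAVs;
    #    S holds per-location concept scores.
    nmf = NMF(n_components=n_concepts, init="nndsvda", max_iter=500)
    S = nmf.fit_transform(acts.reshape(n * h * w, c))  # (n*h*w, n_concepts)

    # 2) Pool concept scores over spatial locations -> one vector per image.
    concept_scores = S.reshape(n, h * w, n_concepts).mean(axis=1)

    # 3) Fit a shallow decision tree on concept scores to mimic the CNN.
    tree = DecisionTreeClassifier(max_depth=max_depth)
    tree.fit(concept_scores, cnn_labels)
    preds = tree.predict(concept_scores)

    # 4) Fidelity: agreement with the CNN's labels.
    #    Performance: agreement with the ground-truth labels.
    fidelity = accuracy_score(cnn_labels, preds)
    performance = accuracy_score(true_labels, preds)
    return tree, nmf.components_, fidelity, performance
```

In practice the tree would be fit and evaluated on separate data splits, and each NCAV (a row of `nmf.components_`) would be visualised by highlighting the image regions whose activations it reconstructs, which is what makes the resulting tree's splits human-readable.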


