Extracting Interpretable Concept-Based Decision Trees from CNNs

06/11/2019
by Conner Chyung, et al.

In an attempt to gain a deeper understanding of how convolutional neural networks (CNNs) reason about human-understandable concepts, we present a method to infer labeled concept data from hidden layer activations and interpret the concepts through a shallow decision tree. The decision tree can indicate which concepts the model deems important, as well as provide an understanding of how those concepts interact with each other. Experiments demonstrate that the extracted decision tree accurately represents the original CNN's classifications at low tree depths, thus encouraging human-in-the-loop understanding of discriminative concepts.
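The following is a minimal sketch, not the authors' implementation, of the general idea the abstract describes: take per-image concept scores (which in the paper are inferred from hidden-layer activations), fit a shallow decision tree to reproduce the CNN's predicted classes, and report how faithfully the tree mimics the network. The concept names, synthetic concept scores, and stand-in CNN predictions below are hypothetical placeholders.

```python
# Sketch: approximate a CNN's decisions with a shallow, concept-based tree.
# Assumes concept scores and CNN predictions are already available; here they
# are synthetic placeholders rather than real activations.
import numpy as np
from sklearn.tree import DecisionTreeClassifier, export_text
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)

# Hypothetical concept scores for 1000 images and 5 named concepts.
concept_names = ["striped", "furry", "wheel", "metallic", "wing"]
concept_scores = rng.random((1000, len(concept_names)))

# Stand-in for the CNN's predicted classes (e.g., model(images).argmax(1)).
cnn_predictions = (
    0.7 * concept_scores[:, 0] + 0.3 * concept_scores[:, 1] > 0.5
).astype(int)

# A low max_depth keeps the resulting explanation human-readable.
tree = DecisionTreeClassifier(max_depth=3)
tree.fit(concept_scores, cnn_predictions)

# Fidelity: how often the tree reproduces the CNN's own decisions.
fidelity = accuracy_score(cnn_predictions, tree.predict(concept_scores))
print(f"Fidelity to CNN predictions: {fidelity:.2%}")
print(export_text(tree, feature_names=concept_names))
```

Fidelity to the CNN's predictions, rather than accuracy on ground-truth labels, is the relevant measure here, since the goal is to explain what the network actually does.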

