CHAIN: Concept-harmonized Hierarchical Inference Interpretation of Deep Convolutional Neural Networks

02/05/2020
by Dan Wang et al.

With the great success of deep convolutional neural networks, there is a growing demand for interpreting their internal mechanisms, especially their decision-making logic. To tackle this challenge, the Concept-harmonized HierArchical INference (CHAIN) framework is proposed to interpret the network decision-making process. CHAIN interprets a network decision by hierarchically deducing it into visual concepts from high to low semantic levels. To achieve this, three models are proposed in sequence: the concept harmonizing model, the hierarchical inference model, and the concept-harmonized hierarchical inference model. First, the concept harmonizing model aligns visual concepts at high to low semantic levels with network units in deep to shallow layers. Second, the hierarchical inference model disassembles a concept-aligned unit in a deep layer into units in shallower layers. Third, the concept-harmonized hierarchical inference model infers a deep-layer concept from its shallow-layer concepts. Repeating this inference across layers traces the decision backward from the highest to the lowest semantic level. As a result, network decision-making is explained as a form of concept-harmonized hierarchical inference that is comparable to human reasoning, and the network's layer structure for feature learning can be explained in terms of hierarchical visual concepts. Quantitative and qualitative experiments demonstrate the effectiveness of CHAIN at both the instance and class levels.
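The abstract gives no implementation details, so the following is only a minimal, hypothetical sketch of the two core steps it describes: aligning visual concepts with network units (concept harmonizing) and disassembling a deep-layer concept into shallow-layer units (hierarchical inference). The correlation-based alignment, the Lasso regression, and all data shapes below are illustrative assumptions, not the authors' method.

# Illustrative sketch only; NOT the CHAIN authors' implementation.
# Activations, concept labels, and both scoring choices are assumptions.
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)

# Toy stand-ins for real data:
# deep_acts:    per-image activations of D units in a deep layer
# shallow_acts: per-image activations of S units in a shallow layer
# concept_lbls: binary per-image labels for K visual concepts (e.g. "wheel")
n_images, D, S, K = 200, 16, 32, 4
deep_acts = rng.random((n_images, D))
shallow_acts = rng.random((n_images, S))
concept_lbls = (rng.random((n_images, K)) > 0.5).astype(float)

# Step 1: concept harmonizing (assumption: correlation-based alignment).
# Align each concept with the deep-layer unit whose activation correlates
# best with the concept's presence across images.
def harmonize(acts, labels):
    n_units = acts.shape[1]
    scores = np.corrcoef(acts.T, labels.T)[:n_units, n_units:]
    return scores.argmax(axis=0)  # best-matching unit index per concept

concept_units = harmonize(deep_acts, concept_lbls)

# Step 2: hierarchical inference (assumption: sparse linear regression).
# Disassemble each concept-aligned deep unit into a sparse combination of
# shallow-layer units, so the deep concept is "inferred" from shallow ones.
for k, u in enumerate(concept_units):
    reg = Lasso(alpha=0.01).fit(shallow_acts, deep_acts[:, u])
    contributors = np.flatnonzero(reg.coef_)
    print(f"concept {k}: deep unit {u} <- shallow units {contributors[:5]}")

Repeating the second step layer by layer, with the selected shallow units treated as the next level's concepts, would produce the backward chain from the highest to the lowest semantic level that the abstract describes.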

