A Peek Into the Reasoning of Neural Networks: Interpreting with Structural Visual Concepts

05/01/2021
by Yunhao Ge, et al.

Despite substantial progress in applying neural networks (NNs) to a wide variety of areas, they still largely suffer from a lack of transparency and interpretability. While recent developments in explainable artificial intelligence attempt to bridge this gap (e.g., by visualizing the correlation between input pixels and final outputs), these approaches are limited to explaining low-level relationships, and crucially, do not provide insights on error correction. In this work, we propose a framework (VRX) to interpret classification NNs with intuitive structural visual concepts. Given a trained classification model, the proposed VRX extracts relevant class-specific visual concepts and organizes them using structural concept graphs (SCG) based on pairwise concept relationships. By means of knowledge distillation, we show VRX can take a step towards mimicking the reasoning process of NNs and provide logical, concept-level explanations for final model decisions. With extensive experiments, we empirically show VRX can meaningfully answer "why" and "why not" questions about the prediction, providing easy-to-understand insights about the reasoning process. We also show that these insights can potentially provide guidance on improving the network's performance.
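
The abstract only sketches the pipeline, so the toy Python snippet below illustrates one plausible reading of the structural concept graph (SCG) idea: nodes are class-specific visual concepts, and edges encode pairwise relationships between them. All names here (Concept, build_scg) and the choice of distance and angle as the pairwise relationship are assumptions made for exposition; they are not the authors' implementation.

    # Minimal sketch of assembling a structural concept graph (SCG) from
    # extracted visual concepts. Names and edge features are illustrative
    # assumptions, not the VRX authors' code.
    from dataclasses import dataclass
    from itertools import combinations
    import math

    @dataclass
    class Concept:
        name: str          # e.g. "wheel" or "headlight" for the class "car"
        center: tuple      # normalized (x, y) location of the concept patch
        importance: float  # relevance of the concept to the class prediction

    def build_scg(concepts):
        """Build a toy SCG: nodes keep concept importance and location,
        edges store a pairwise spatial relationship (distance and angle)."""
        nodes = {c.name: {"importance": c.importance, "center": c.center}
                 for c in concepts}
        edges = {}
        for a, b in combinations(concepts, 2):
            dx, dy = b.center[0] - a.center[0], b.center[1] - a.center[1]
            edges[(a.name, b.name)] = {
                "distance": math.hypot(dx, dy),
                "angle": math.atan2(dy, dx),
            }
        return {"nodes": nodes, "edges": edges}

    if __name__ == "__main__":
        # Hypothetical concepts for the class "car".
        car_concepts = [
            Concept("wheel", (0.3, 0.8), 0.9),
            Concept("headlight", (0.2, 0.5), 0.6),
            Concept("window", (0.5, 0.3), 0.7),
        ]
        scg = build_scg(car_concepts)
        for edge, rel in scg["edges"].items():
            print(edge, f"distance={rel['distance']:.2f}",
                  f"angle={rel['angle']:.2f}")

In the paper's framing, a graph of this kind (one per class) is what the distilled model reasons over, so concept-level answers to "why" and "why not" questions can be traced back to specific nodes and edges.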

Related research

05/07/2022
ConceptDistil: Model-Agnostic Distillation of Concept Explanations
Concept-based explanations aim to fill the model interpretability gap f...

03/04/2022
Concept-based Explanations for Out-Of-Distribution Detectors
Out-of-distribution (OOD) detection plays a crucial role in ensuring the...

04/20/2023
Learning Bottleneck Concepts in Image Classification
Interpreting and explaining the behavior of deep neural networks is crit...

05/14/2021
Cause and Effect: Concept-based Explanation of Neural Networks
In many scenarios, human decisions are explained based on some high-leve...

07/27/2022
Encoding Concepts in Graph Neural Networks
The opaque reasoning of Graph Neural Networks induces a lack of human tr...

06/07/2022
From "Where" to "What": Towards Human-Understandable Explanations through Concept Relevance Propagation
The emerging field of eXplainable Artificial Intelligence (XAI) aims to ...

10/07/2022
TCNL: Transparent and Controllable Network Learning Via Embedding Human-Guided Concepts
Explaining deep learning models is of vital importance for understanding...
