Embedding Deep Networks into Visual Explanations

09/15/2017
by Zhongang Qi, et al.

In this paper, we propose a novel explanation module to explain the predictions made by deep networks. The explanation module works by embedding a high-dimensional deep network layer nonlinearly into a low-dimensional explanation space, while retaining faithfulness, in the sense that the original deep learning predictions can be constructed from the few concepts extracted by the explanation module. We then visualize these concepts so that humans can learn about the high-level concepts deep learning uses to make decisions. We propose an algorithm, the Sparse Reconstruction Autoencoder (SRAE), for learning the embedding into the explanation space; SRAE aims to reconstruct part of the original feature space while retaining faithfulness. A visualization system is then introduced so that humans can understand the features in the explanation space. The proposed method is applied to explain CNN models in image classification tasks, and several novel metrics are introduced to evaluate the quality of explanations quantitatively without human involvement. Experiments show that the proposed approach generates better explanations of the mechanisms CNNs use to make predictions in these tasks.
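To make the idea concrete, below is a minimal sketch of what an SRAE-style explanation module could look like in PyTorch. The layer sizes, the number of concepts, the linear prediction head, and the specific combination of faithfulness, partial-reconstruction, and sparsity terms are illustrative assumptions based on the abstract, not the exact formulation used in the paper.

# Minimal sketch of an SRAE-style explanation module (PyTorch).
# Assumptions (not specified in the abstract): feature dimension, number of
# concepts, and the exact form of each loss term are illustrative only.

import torch
import torch.nn as nn

class SRAESketch(nn.Module):
    def __init__(self, feat_dim=4096, n_concepts=5, n_classes=2):
        super().__init__()
        # Nonlinear embedding of a high-dimensional deep layer into a
        # low-dimensional explanation space
        self.encoder = nn.Sequential(
            nn.Linear(feat_dim, 512), nn.Tanh(),
            nn.Linear(512, n_concepts), nn.Tanh(),
        )
        # Faithfulness head: the original predictions should be
        # reconstructible from the few extracted concepts
        self.predict = nn.Linear(n_concepts, n_classes)
        # Decoder that reconstructs (part of) the original feature space
        self.decoder = nn.Linear(n_concepts, feat_dim)

    def forward(self, feats):
        concepts = self.encoder(feats)
        return concepts, self.predict(concepts), self.decoder(concepts)


def srae_loss(concepts, pred_logits, recon, feats, target_logits,
              w_recon=1.0, w_sparse=0.01):
    # Hypothetical combination of faithfulness, partial reconstruction,
    # and sparsity terms; the paper's exact objective may differ.
    faithfulness = nn.functional.mse_loss(pred_logits, target_logits)
    # L1 on the reconstruction residual lets many feature dimensions stay
    # unexplained, i.e. only part of the feature space is reconstructed
    reconstruction = (recon - feats).abs().mean()
    sparsity = concepts.abs().mean()
    return faithfulness + w_recon * reconstruction + w_sparse * sparsity


if __name__ == "__main__":
    feats = torch.randn(8, 4096)          # activations from a frozen CNN layer
    target_logits = torch.randn(8, 2)     # the CNN's original predictions
    model = SRAESketch()
    concepts, pred_logits, recon = model(feats)
    loss = srae_loss(concepts, pred_logits, recon, feats, target_logits)
    loss.backward()
    print(loss.item())

In this sketch the CNN is kept frozen: the explanation module is trained only on the CNN's layer activations and output logits, so visualizing the learned concepts does not change the model being explained.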


