Using KL-divergence to focus Deep Visual Explanation

We present a method for explaining the image classification predictions of deep convolutional neural networks by highlighting the pixels in the image that influence the final class prediction. Our method requires a heuristic for selecting the parameters hypothesized to be most relevant to that prediction, and here we use Kullback-Leibler (KL) divergence to provide this focus. Overall, our approach helps in understanding and interpreting deep network predictions, and we hope it contributes to a foundation for such understanding of deep learning networks. In this brief paper, our experiments evaluate the performance of two popular networks in this context of interpretability.
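As a rough illustration of the kind of KL-divergence-based focusing described above (not the authors' exact procedure, whose details are not given in this abstract), the sketch below scores how strongly a perturbation such as masking an image region shifts the classifier's output distribution away from its original prediction; the masking protocol, class counts, and probability values are assumptions for illustration only.

```python
import numpy as np

def kl_divergence(p, q, eps=1e-12):
    """KL(p || q) between two discrete class-probability distributions."""
    p = np.asarray(p, dtype=np.float64) + eps
    q = np.asarray(q, dtype=np.float64) + eps
    p /= p.sum()  # renormalize after adding eps for numerical stability
    q /= q.sum()
    return float(np.sum(p * np.log(p / q)))

# Hypothetical use: a large divergence suggests the masked region carried
# evidence that the network relied on for its prediction.
p_original = np.array([0.85, 0.10, 0.05])  # softmax output, full image
p_masked   = np.array([0.40, 0.35, 0.25])  # softmax output, region masked
relevance = kl_divergence(p_original, p_masked)
print(f"KL-based relevance score: {relevance:.4f}")
```

Regions whose removal yields the largest divergence would then be the natural candidates to highlight in a visual explanation.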
