Visual Explanation by Interpretation: Improving Visual Feedback Capabilities of Deep Neural Networks

12/18/2017
by Jose Oramas, et al.

Learning-based representations have become the de facto means to address computer vision tasks. Despite their massive adoption, the amount of work aiming at understanding the internal representations learned by these models is rather limited. Existing methods aimed at model interpretation either require exhaustive manual inspection of visualizations, or link internal network activations with external "possibly useful" annotated concepts. We propose an intermediate scheme in which, given a pretrained model, we automatically identify internal features relevant for the set of classes considered by the model, without requiring additional annotations. We interpret the model through average visualizations of these features. Then, at test time, we explain the network prediction by accompanying the predicted class label with supporting heatmap visualizations derived from the identified relevant features. In addition, we propose a method to address the artifacts introduced by strided operations in deconvnet-based visualizations. Our evaluation on the MNIST, ILSVRC'12 and Fashion 144k datasets quantitatively shows that the proposed method is able to identify relevant internal features for the classes of interest while improving the quality of the produced visualizations.
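The abstract describes the pipeline only at a high level. The snippet below is a minimal sketch of that general idea, not the paper's actual algorithm: it assumes a PyTorch ResNet-18, uses the class-conditional mean activation of one internal layer (layer4) as a stand-in relevance score for identifying class-relevant channels, and builds the supporting heatmap by bilinearly upsampling the top-scoring channels' activation maps rather than by the deconvnet-based visualizations mentioned above. The layer choice, scoring rule, and the `loader` variable are all illustrative assumptions.

```python
# Sketch: rank internal feature channels of a pretrained CNN by how strongly
# they respond to each class, then surface the top channels' activation maps
# as a heatmap accompanying the predicted label at test time.
import torch
import torch.nn.functional as F
from torchvision import models

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT).eval()

features = {}
def hook(_module, _inputs, output):
    features["maps"] = output  # (B, C, H, W) activations of the hooked layer

model.layer4.register_forward_hook(hook)

@torch.no_grad()
def channel_relevance(loader, num_classes, num_channels):
    """Average per-channel activation for each class over a labelled set."""
    scores = torch.zeros(num_classes, num_channels)
    counts = torch.zeros(num_classes)
    for images, labels in loader:
        model(images)
        act = features["maps"].mean(dim=(2, 3))  # (B, C) spatial mean per channel
        for c in labels.unique():
            mask = labels == c
            scores[c] += act[mask].sum(dim=0)
            counts[c] += mask.sum()
    return scores / counts.clamp(min=1).unsqueeze(1)

@torch.no_grad()
def explain(image, relevance, top_k=3):
    """Predict a label and return a heatmap built from its top-k relevant channels."""
    logits = model(image.unsqueeze(0))
    pred = logits.argmax(dim=1).item()
    top = relevance[pred].topk(top_k).indices            # most relevant channels
    maps = features["maps"][0, top]                       # (top_k, H, W)
    heat = maps.mean(dim=0, keepdim=True).unsqueeze(0)    # average the selected maps
    heat = F.interpolate(heat, size=image.shape[-2:],
                         mode="bilinear", align_corners=False)
    heat = (heat - heat.min()) / (heat.max() - heat.min() + 1e-8)
    return pred, heat[0, 0]                                # class id + (H, W) heatmap in [0, 1]
```

With a ResNet-18 hooked at layer4, `num_channels` would be 512, and any labelled DataLoader over the training set can serve as `loader`; the returned heatmap can be overlaid on the input image to accompany the predicted class label.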


