Towards Visual Explanations for Convolutional Neural Networks via Input Resampling

07/30/2017, by Benjamin J. Lengerich, et al.

The predictive power of neural networks often comes at the cost of model interpretability. Several techniques have been developed to explain model outputs in terms of input features; however, it is difficult to translate such interpretations into actionable insight. Here, we propose a framework for analyzing predictions in terms of the model's internal features by inspecting information flow through the network. Given a trained network and a test image, we select neurons by two metrics, both measured over a set of images created by perturbations of the input image: (1) the magnitude of the correlation between the neuron's activation and the network output, and (2) the precision of the neuron's activation. We show that the former metric selects neurons that exert large influence over the network output, while the latter selects neurons that activate on generalizable features. By comparing the sets of neurons selected by these two metrics, our framework suggests a way to investigate the internal attention mechanisms of convolutional neural networks.
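To make the two metrics concrete, below is a minimal sketch of how they could be computed for one internal layer of a PyTorch model. Everything here is illustrative rather than the paper's implementation: Gaussian noise stands in for the paper's input-resampling scheme, `perturbed_batch` and `score_neurons` are hypothetical helpers, and the precision definition used (the fraction of a neuron's firings that coincide with a confident network output) is an assumed reading of the second metric.

```python
import torch
import numpy as np

def perturbed_batch(image, n=64, noise_std=0.1):
    """Create n perturbed copies of one image (C, H, W). Gaussian noise is
    a stand-in for whatever input-resampling scheme is actually used."""
    batch = image.unsqueeze(0).repeat(n, 1, 1, 1)
    return batch + noise_std * torch.randn_like(batch)

def score_neurons(model, layer, image, target_class, n=64, act_thresh=0.0):
    """Score each channel ("neuron") of a conv layer by (1) |correlation|
    of its activation with the target-class logit and (2) the precision of
    its activation as a predictor of a confident output, both measured
    over perturbations of `image`."""
    acts = []
    hook = layer.register_forward_hook(
        lambda module, inputs, output: acts.append(output.detach()))
    with torch.no_grad():
        logits = model(perturbed_batch(image, n))
    hook.remove()

    # Collapse spatial dims so each channel yields one scalar per image.
    a = acts[0].flatten(2).mean(-1).cpu().numpy()   # (n, num_neurons)
    y = logits[:, target_class].cpu().numpy()       # (n,)

    # Metric 1: magnitude of the Pearson correlation with the output.
    a_c = a - a.mean(0)
    y_c = y - y.mean()
    corr = np.abs((a_c * y_c[:, None]).mean(0)
                  / (a_c.std(0) * y_c.std() + 1e-8))

    # Metric 2 (assumed definition): precision = P(output high | neuron
    # fires), with "high" meaning the logit exceeds its batch median.
    fires = a > act_thresh
    high = (y > np.median(y))[:, None]
    precision = (fires & high).sum(0) / (fires.sum(0) + 1e-8)
    return corr, precision
```

Under this reading, neurons that score high on the first metric would be the high-influence units, those that score high on the second would be units firing reliably on generalizable features, and comparing the two top-k sets mirrors the comparison the abstract describes.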

