Summit: Scaling Deep Learning Interpretability by Visualizing Activation and Attribution Summarizations

04/04/2019
by Fred Hohman, et al.

Deep learning is increasingly used in decision-making tasks. However, understanding how neural networks produce final predictions remains a fundamental challenge. Existing work on interpreting neural network predictions for images often focuses on explaining predictions for single images or neurons. Because predictions are often computed from millions of weights optimized over millions of images, such explanations can easily miss the bigger picture. We present Summit, the first interactive system that scalably and systematically summarizes and visualizes what features a deep learning model has learned and how those features interact to make predictions. Summit introduces two new scalable summarization techniques: (1) activation aggregation discovers important neurons, and (2) neuron-influence aggregation identifies relationships among such neurons. Summit combines these techniques to create the novel attribution graph, which reveals and summarizes crucial neuron associations and substructures that contribute to a model's outcomes. Summit scales to large data, such as the ImageNet dataset with 1.2M images, and leverages neural network feature visualization and dataset examples to help users distill large, complex neural network models into compact, interactive visualizations. We present neural network exploration scenarios in which Summit helps us discover multiple surprising insights into a state-of-the-art image classifier's learned representations and informs future neural network architecture design. The Summit visualization runs in modern web browsers and is open-sourced.
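
To make the two summarization steps named in the abstract concrete, here is a minimal sketch of how activation aggregation, neuron-influence aggregation, and the resulting attribution graph could fit together. The function names, the top-k counting scheme, the fixed keep_nodes/keep_edges cutoffs, and the random stand-in data are illustrative assumptions, not the paper's exact formulation, which operates on real activations and attributions from the trained network.

```python
# Illustrative sketch of Summit-style aggregation; not the authors' exact algorithm.
import numpy as np

def activation_aggregation(channel_acts, labels, num_classes, top_k=5):
    """Count, per class, how often each channel ranks among the top_k
    most activated channels for an image of that class.
    channel_acts: (num_images, num_channels) spatially max-pooled
    activations for one layer."""
    counts = np.zeros((num_classes, channel_acts.shape[1]), dtype=int)
    for acts, label in zip(channel_acts, labels):
        counts[label, np.argsort(acts)[-top_k:]] += 1
    return counts

def influence_aggregation(edge_strengths, labels, num_classes, top_k=3):
    """Count, per class, how often each (previous-layer channel ->
    current-layer channel) edge ranks among the top_k influences on
    its current-layer channel.
    edge_strengths: (num_images, n_prev, n_cur) per-image influence
    scores, e.g. max-pooled single-channel partial convolutions."""
    _, n_prev, n_cur = edge_strengths.shape
    counts = np.zeros((num_classes, n_prev, n_cur), dtype=int)
    for strengths, label in zip(edge_strengths, labels):
        for cur in range(n_cur):
            counts[label, np.argsort(strengths[:, cur])[-top_k:], cur] += 1
    return counts

def attribution_graph(node_counts, edge_counts, cls, keep_nodes=10, keep_edges=20):
    """Keep the most frequently important channels (nodes) and the most
    frequently influential connections (edges) for one class."""
    nodes = set(np.argsort(node_counts[cls])[-keep_nodes:].tolist())
    flat = np.argsort(edge_counts[cls], axis=None)[-keep_edges:]
    edges = [np.unravel_index(i, edge_counts[cls].shape) for i in flat]
    return nodes, edges

# Demo on random stand-in data: 100 images, 10 classes, 16 channels
# in layer l-1 feeding 32 channels in layer l.
rng = np.random.default_rng(0)
labels = rng.integers(0, 10, size=100)
acts = rng.random((100, 32))
influences = rng.random((100, 16, 32))

node_counts = activation_aggregation(acts, labels, num_classes=10)
edge_counts = influence_aggregation(influences, labels, num_classes=10)
nodes, edges = attribution_graph(node_counts, edge_counts, cls=0)
print(sorted(nodes), edges[:3])
```

Counting top-k memberships per class, rather than averaging raw activations, is what lets the summaries scale: each image contributes a fixed small number of increments, and the per-class counts can be accumulated in one streaming pass over a dataset the size of ImageNet.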

Related research

Visual Summary of Value-level Feature Attribution in Prediction Classes with Recurrent Neural Networks (01/23/2020)
Deep recurrent neural networks (RNNs) are increasingly used in decision-ma...

NeuroCartography: Scalable Automatic Visual Summarization of Concepts in Deep Neural Networks (08/29/2021)
Existing research on making sense of deep neural networks often focuses ...

ActiVis: Visual Exploration of Industry-Scale Deep Neural Network Models (04/06/2017)
While deep learning models have achieved state-of-the-art accuracies for...

NeuralDivergence: Exploring and Understanding Neural Networks by Comparing Activation Distributions (06/02/2019)
As deep neural networks are increasingly used in solving high-stake prob...

Global Explanations of Neural Networks: Mapping the Landscape of Predictions (02/06/2019)
A barrier to the wider adoption of neural networks is their lack of inte...

Using KL-divergence to focus Deep Visual Explanation (11/17/2017)
We present a method for explaining the image classification predictions ...

Gradient Hedging for Intensively Exploring Salient Interpretation beyond Neuron Activation (05/23/2022)
Hedging is a strategy for reducing the potential risks in various types ...
