TopoAct: Exploring the Shape of Activations in Deep Learning

12/13/2019
by Archit Rathore, et al.

Deep neural networks such as GoogLeNet and ResNet have achieved superhuman performance in tasks like image classification. To understand how such superior performance is achieved, we can probe a trained deep neural network by studying neuron activations, that is, combinations of neuron firings, at any layer of the network in response to a particular input. With a large set of input images, we aim to obtain a global view of what neurons detect by studying their activations. We ask the following questions: What is the shape of the space of activations? That is, what is the organizational principle behind neuron activations, and how are the activations related within a layer and across layers? Applying tools from topological data analysis, we present TopoAct, a visual exploration system used to study topological summaries of activation vectors for a single layer as well as the evolution of such summaries across multiple layers. We present visual exploration scenarios using TopoAct that provide valuable insights towards learned representations of an image classifier.
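The "topological summaries" described above are built with a Mapper-style construction, a standard tool from topological data analysis: project the activation vectors through a lens function, cover the lens range with overlapping intervals, cluster each preimage, and connect clusters that share points. The sketch below is a minimal, illustrative Mapper on a synthetic point cloud standing in for activation vectors (pure NumPy; parameter names and the single-linkage clusterer are assumptions for illustration, not TopoAct's exact pipeline):

```python
import numpy as np

def _cluster(points, idx, eps):
    """Single-linkage clustering at scale eps via union-find
    over the points indexed by idx."""
    parent = list(range(len(idx)))

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i

    for a in range(len(idx)):
        for b in range(a + 1, len(idx)):
            if np.linalg.norm(points[idx[a]] - points[idx[b]]) <= eps:
                parent[find(a)] = find(b)

    groups = {}
    for a in range(len(idx)):
        groups.setdefault(find(a), []).append(idx[a])
    return [frozenset(g) for g in groups.values()]

def mapper(points, lens, n_intervals=5, overlap=0.3, eps=0.5):
    """Minimal Mapper graph: nodes are clusters of points whose lens
    values fall in one of n_intervals overlapping intervals; two nodes
    are joined by an edge when their clusters share a point."""
    lo, hi = float(lens.min()), float(lens.max())
    # Interval length chosen so the overlapping intervals tile [lo, hi].
    length = (hi - lo) / (n_intervals * (1 - overlap) + overlap)
    step = length * (1 - overlap)

    nodes = []
    for i in range(n_intervals):
        a = lo + i * step
        idx = np.where((lens >= a) & (lens <= a + length))[0]
        if len(idx):
            nodes.extend(_cluster(points, idx, eps))

    edges = [(i, j)
             for i in range(len(nodes))
             for j in range(i + 1, len(nodes))
             if nodes[i] & nodes[j]]
    return nodes, edges
```

On points sampled densely along a line segment with the first coordinate as the lens, each interval yields one cluster and adjacent intervals share points, so the Mapper graph is a path — the summary recovers the linear "shape" of the data, which is exactly the kind of structure TopoAct surfaces for activation vectors.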


