
PCACE: A Statistical Approach to Ranking Neurons for CNN Interpretability

12/31/2021
by Sílvia Casacuberta, et al.
University of Oxford
Harvard University
Imperial College London

In this paper we introduce a new problem within the growing literature on interpretability for convolutional neural networks (CNNs). While previous work has focused on the question of how to visually interpret CNNs, we ask what it is that we actually care to interpret, that is, which layers and neurons are worth our attention. Due to the vast size of modern deep learning architectures, automated, quantitative methods are needed to rank the relative importance of neurons and thereby answer this question. We present a new statistical method for ranking the hidden neurons in any convolutional layer of a network, defining importance as the maximal correlation between the activation maps and the class score. We describe different ways in which this method can be used for visualization with MNIST and ImageNet, and show a real-world application of our method to air pollution prediction with street-level images.
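
The ranking idea described above can be sketched in a few lines. The code below is not the authors' PCACE implementation: it hooks one convolutional layer of a CNN, summarises each channel's activation map by its spatial mean, and ranks channels by the absolute Pearson correlation between that summary and the class score. The choice of model (resnet18), layer, class index, batch of random images, and the use of plain Pearson correlation in place of the PCA-plus-ACE maximal correlation statistic are all illustrative assumptions.

```python
# Minimal sketch: rank the channels of one convolutional layer by how strongly a
# scalar summary of each channel's activation map correlates with the class score.
# Plain Pearson correlation stands in for the maximal-correlation statistic here.
import torch
import torchvision.models as models
import numpy as np

model = models.resnet18(weights=None).eval()   # any CNN; random weights keep this self-contained
layer = model.layer3[1].conv2                  # hypothetical choice of hidden conv layer
target_class = 207                             # hypothetical class index

activations = []

def hook(module, inputs, output):
    # Summarise each H x W activation map by its spatial mean: shape (B, C)
    activations.append(output.detach().mean(dim=(2, 3)))

handle = layer.register_forward_hook(hook)

# Stand-in data: random images; in practice, images relevant to the target class.
images = torch.randn(64, 3, 224, 224)
with torch.no_grad():
    scores = model(images)[:, target_class]    # class score per image, shape (B,)
handle.remove()

acts = torch.cat(activations).numpy()          # (B, C) per-channel summaries
y = scores.numpy()

# Absolute correlation of each channel's summary with the class score, then rank.
corrs = np.array([abs(np.corrcoef(acts[:, c], y)[0, 1]) for c in range(acts.shape[1])])
ranking = np.argsort(-corrs)
print("Top-5 channels by |correlation| with the class score:", ranking[:5].tolist())
```

Restricting the correlation to images of a single class, as done above via the class score column, is one way to obtain per-class neuron rankings; averaging the statistic over classes would give a class-agnostic ranking.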


Related research

04/30/2018
How convolutional neural network see the world - A survey of convolutional neural network visualization methods
Nowadays, the Convolutional Neural Networks (CNNs) have achieved impress...

01/29/2021
The Mind's Eye: Visualizing Class-Agnostic Features of CNNs
Visual interpretability of Convolutional Neural Networks (CNNs) has gain...

05/06/2019
Deep Visual City Recognition Visualization
Understanding how cities visually differ from each other is interesting...

12/11/2018
Diagnostic Visualization for Deep Neural Networks Using Stochastic Gradient Langevin Dynamics
The internal states of most deep neural networks are difficult to interp...

10/18/2020
What do CNN neurons learn: Visualization & Clustering
In recent years convolutional neural networks (CNN) have shown striking ...

07/03/2019
Neuron ranking -- an informed way to condense convolutional neural networks architecture
Convolutional neural networks (CNNs) in recent years have made a dramati...

12/14/2021
Identifying Class Specific Filters with L1 Norm Frequency Histograms in Deep CNNs
Interpretability of Deep Neural Networks has become a major area of expl...