The Mind's Eye: Visualizing Class-Agnostic Features of CNNs

01/29/2021
by Alexandros Stergiou, et al.

Visual interpretability of Convolutional Neural Networks (CNNs) has gained significant popularity because of the great challenges that CNN complexity poses to understanding their inner workings. Although many techniques have been proposed to visualize class features of CNNs, most of them do not provide a correspondence between inputs and the features extracted in specific layers. This prevents the discovery of the stimuli that each layer responds to most strongly. We propose an approach to visually interpret CNN features given a set of images by creating corresponding images that depict the most informative features of a specific layer. Exploring features in this class-agnostic manner allows for a greater focus on the feature extractor of CNNs. Our method uses a dual-objective activation maximization and distance minimization loss, without requiring a generator network or modifications to the original model. This limits the number of FLOPs to that of the original network. We demonstrate the visualization quality on widely used architectures.
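To make the dual-objective idea concrete, below is a minimal sketch (not the authors' released code) of how an activation-maximization plus distance-minimization loss can be optimized directly over an input image, with no generator network. The choice of backbone (VGG-16), the hooked layer index, the `dist_weight` trade-off, and the optimizer settings are all illustrative assumptions.

import torch
import torch.nn.functional as F
from torchvision import models

# Pretrained backbone used only as a fixed feature extractor.
model = models.vgg16(pretrained=True).eval()
target_layer = model.features[16]  # assumed layer whose features we visualize

# Capture the activations of the chosen layer with a forward hook.
activations = {}
def hook(_module, _inputs, output):
    activations["feat"] = output
target_layer.register_forward_hook(hook)

def visualize(input_image, steps=200, lr=0.05, dist_weight=1e-3):
    # Optimize the image itself; no generator network or model changes,
    # so the per-step cost is essentially that of the original forward pass.
    x = input_image.clone().requires_grad_(True)
    optimizer = torch.optim.Adam([x], lr=lr)
    for _ in range(steps):
        optimizer.zero_grad()
        model(x)
        act = activations["feat"]
        # Dual objective: maximize the layer's activations while
        # minimizing the distance to the original input image.
        loss = -act.mean() + dist_weight * F.mse_loss(x, input_image)
        loss.backward()
        optimizer.step()
    return x.detach()

The distance term is what keeps the visualization tied to the given input rather than producing a free-form synthetic image, which is the class-agnostic, input-specific behavior the abstract describes.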


Related research

- How convolutional neural network see the world - A survey of convolutional neural network visualization methods (04/30/2018)
- Unsupervised Learning of Neural Networks to Explain Neural Networks (extended abstract) (01/21/2019)
- PCACE: A Statistical Approach to Ranking Neurons for CNN Interpretability (12/31/2021)
- Learning Class Regularized Features for Action Recognition (02/07/2020)
- Identifying Class Specific Filters with L1 Norm Frequency Histograms in Deep CNNs (12/14/2021)
- Neural Activation Patterns (NAPs): Visual Explainability of Learned Concepts (06/20/2022)
- Exemplary Natural Images Explain CNN Activations Better than Feature Visualizations (10/23/2020)
