Exemplary Natural Images Explain CNN Activations Better than Feature Visualizations

10/23/2020
by Judy Borowski, et al.

Feature visualizations such as synthetic maximally activating images are a widely used explanation method to better understand the information processing of convolutional neural networks (CNNs). At the same time, there are concerns that these visualizations might not accurately represent CNNs' inner workings. Here, we measure how much extremely activating images help humans to predict CNN activations. Using a well-controlled psychophysical paradigm, we compare the informativeness of synthetic images (Olah et al., 2017) with a simple baseline visualization, namely exemplary natural images that also strongly activate a specific feature map. Given either synthetic or natural reference images, human participants choose which of two query images leads to a strong positive activation. The experiment is designed to maximize participants' performance, and it is the first to probe intermediate instead of final layer representations. We find that synthetic images indeed provide helpful information about feature map activations (82% accuracy). However, natural images, originally intended to be a baseline, outperform synthetic images by a wide margin (92% accuracy). Participants are also faster and more confident for natural images, whereas subjective impressions about the interpretability of feature visualizations are mixed. The higher informativeness of natural images holds across most layers, for both expert and lay participants, as well as for hand- and randomly-picked feature visualizations. Even if only a single reference image is given, synthetic images provide less information than natural images (65% accuracy). In summary, popular synthetic images from feature visualizations are significantly less informative for assessing CNN activations than natural images. We argue that future visualization methods should improve over this simple baseline.
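The "synthetic maximally activating images" discussed above are typically produced by gradient ascent on the input: starting from noise, the image is iteratively adjusted to increase a chosen feature map's activation. The following is a minimal toy sketch of that idea, not the paper's actual setup: a real feature visualization (e.g. Olah et al., 2017) backpropagates through a full trained CNN and adds regularizers, whereas here the "feature map" is just one fixed 3x3 edge kernel, and the image size, step size, and iteration count are arbitrary choices for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for a CNN feature map: a fixed 3x3 vertical-edge kernel.
kernel = np.array([[ 1., 0., -1.],
                   [ 1., 0., -1.],
                   [ 1., 0., -1.]])

def mean_activation(img):
    """Mean kernel response over all valid 3x3 patches (valid cross-correlation)."""
    h, w = img.shape
    total = 0.0
    for i in range(h - 2):
        for j in range(w - 2):
            total += np.sum(img[i:i+3, j:j+3] * kernel)
    return total / ((h - 2) * (w - 2))

def grad_mean_activation(img):
    """Analytic gradient of mean_activation w.r.t. each pixel.
    The objective is linear in the image, so each pixel's gradient is the
    sum of the kernel weights of every patch that covers it."""
    h, w = img.shape
    g = np.zeros_like(img)
    n = (h - 2) * (w - 2)
    for i in range(h - 2):
        for j in range(w - 2):
            g[i:i+3, j:j+3] += kernel / n
    return g

# Gradient ascent from noise yields the synthetic "maximally activating image".
img = rng.normal(0.0, 0.1, size=(8, 8))
before = mean_activation(img)
for _ in range(100):
    img += 0.5 * grad_mean_activation(img)
    img = np.clip(img, -1.0, 1.0)  # keep pixel values in a bounded range
after = mean_activation(img)
```

After optimization the image saturates toward bright columns on the left and dark columns on the right, i.e. the stimulus this particular kernel responds to most strongly; the paper's question is how informative such optimized images are to a human, compared with natural dataset images that happen to activate the same feature map.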


Related research

- How Well do Feature Visualizations Support Causal Understanding of CNN Activations? (06/23/2021)
- Don't trust your eyes: on the (un)reliability of feature visualizations (06/07/2023)
- Unlocking Feature Visualization for Deeper Networks with MAgnitude Constrained Optimization (06/11/2023)
- Understanding Neural Networks Through Deep Visualization (06/22/2015)
- Visualizing Deep Convolutional Neural Networks Using Natural Pre-Images (12/07/2015)
- The Mind's Eye: Visualizing Class-Agnostic Features of CNNs (01/29/2021)
- Targeted Background Removal Creates Interpretable Feature Visualizations (06/22/2023)
