Unlocking Feature Visualization for Deeper Networks with MAgnitude Constrained Optimization

06/11/2023
by   Thomas Fel, et al.

Feature visualization has gained substantial popularity, particularly after the influential work by Olah et al. in 2017, which established it as a crucial tool for explainability. However, its widespread adoption has been limited by a reliance on tricks to generate interpretable images, and by corresponding challenges in scaling it to deeper neural networks. Here, we describe MACO, a simple approach to address these shortcomings. The main idea is to generate images by optimizing the phase spectrum while keeping the magnitude spectrum constant, ensuring that the generated explanations lie in the space of natural images. Our approach yields significantly better results (both qualitatively and quantitatively) and unlocks efficient and interpretable feature visualizations for large state-of-the-art neural networks. We also show that our approach exhibits an attribution mechanism that allows us to augment feature visualizations with spatial importance. We validate our method on a novel benchmark for comparing feature visualization methods, and release visualizations for all classes of the ImageNet dataset at https://serre-lab.github.io/Lens/. Overall, our approach unlocks, for the first time, feature visualizations for large, state-of-the-art deep neural networks without resorting to any parametric prior image model.
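The core idea in the abstract, parameterizing the image by its Fourier phase while freezing the magnitude spectrum, can be sketched in a few lines. The following is a minimal NumPy illustration of the constraint only, not the authors' implementation: in MACO the phase would be updated by a gradient-based optimizer against a network activation, and here a generic 1/f spectrum stands in for the natural-image magnitude used in the paper.

```python
import numpy as np

def image_from_phase(phase, magnitude):
    """Map an optimizable phase spectrum to a real image whose Fourier
    magnitude spectrum is fixed.

    `phase` and `magnitude` are half-spectra of shape (h, w // 2 + 1),
    as produced by np.fft.rfft2. Since only `phase` is treated as a free
    parameter, every image this function can produce shares the same
    (natural-image-like) magnitude spectrum.
    """
    h, half_w = phase.shape
    w = 2 * (half_w - 1)
    spectrum = magnitude * np.exp(1j * phase)
    return np.fft.irfft2(spectrum, s=(h, w))

# Toy setup: a 1/f magnitude spectrum (characteristic of natural images)
# and a random phase initialization standing in for the optimized variable.
rng = np.random.default_rng(0)
h = w = 32
fy = np.fft.fftfreq(h)[:, None]
fx = np.fft.rfftfreq(w)[None, :]
magnitude = 1.0 / np.maximum(np.hypot(fy, fx), 1.0 / max(h, w))
phase = rng.uniform(-np.pi, np.pi, size=magnitude.shape)

img = image_from_phase(phase, magnitude)
# The constraint holds by construction: re-analyzing the generated image
# recovers the fixed magnitude on the unconstrained frequency bins.
recovered = np.abs(np.fft.rfft2(img))
```

Whatever phase values an optimizer settles on, the generated image cannot leave the set of images sharing the prescribed magnitude spectrum, which is what keeps the visualizations in the space of natural images.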


research · 10/23/2020
Exemplary Natural Images Explain CNN Activations Better than Feature Visualizations
Feature visualizations such as synthetic maximally activating images are...

research · 06/07/2023
Don't trust your eyes: on the (un)reliability of feature visualizations
How do neural networks extract patterns from pixels? Feature visualizati...

research · 09/27/2021
Time Series Model Attribution Visualizations as Explanations
Attributions are a common local explanation technique for deep learning ...

research · 09/03/2019
Illuminated Decision Trees with Lucid
The Lucid methods described by Olah et al. (2018) provide a way to inspe...

research · 06/22/2015
Understanding Neural Networks Through Deep Visualization
Recent years have produced great advances in training large, deep neural...

research · 06/11/2023
A Holistic Approach to Unifying Automatic Concept Extraction and Concept Importance Estimation
In recent years, concept-based approaches have emerged as some of the mo...

research · 06/23/2021
How Well do Feature Visualizations Support Causal Understanding of CNN Activations?
One widely used approach towards understanding the inner workings of dee...
