Removing Brightness Bias in Rectified Gradients

11/10/2020
by Lennart Brocki, et al.

Interpretation and improvement of deep neural networks relies on a better understanding of their underlying mechanisms. In particular, gradients of classes or concepts with respect to the input features (e.g., pixels in images) are often used as importance scores and visualized as saliency maps. This family of saliency methods provides an intuitive way to identify input features with a substantial influence on classifications or latent concepts. Rectified Gradients <cit.> is a recent method that introduces layer-wise thresholding to denoise saliency maps. While visually coherent in certain cases, Rectified Gradients exhibits a brightness bias: we demonstrate that dark areas of an input image are not highlighted in its saliency maps, even when they are relevant to the class or concept. Even in scaled images, the bias persists around an artificial point in the color spectrum. Our simple modification removes this bias and recovers input features that were suppressed because of their color. "No Bias Rectified Gradient" is available at <https://github.com/lenbrocki/NoBias-Rectified-Gradient>
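To illustrate where the brightness bias comes from, below is a minimal PyTorch sketch. It is not the authors' implementation and omits the layer-wise thresholding of the full Rectified Gradients method; the model choice and the file name input.jpg are assumptions for illustration. The biased variant multiplies the propagated gradient by the input image at the final step, so pixels whose normalized values are near zero (an artificial point in the color spectrum corresponding to the dataset mean) receive almost no attribution; the bias-free variant simply drops that multiplication.

```python
import torch
from PIL import Image
from torchvision import models, transforms as T

# Pretrained classifier; any differentiable image classifier works here.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT).eval()

preprocess = T.Compose([
    T.Resize(256),
    T.CenterCrop(224),
    T.ToTensor(),
    # Normalization shifts the zero point to the dataset mean color,
    # i.e. an artificial point in the color spectrum.
    T.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

# "input.jpg" is a hypothetical example image.
img = preprocess(Image.open("input.jpg").convert("RGB")).unsqueeze(0)
img.requires_grad_(True)

# Backpropagate the top-class score to obtain the input gradient.
logits = model(img)
logits[0, logits.argmax()].backward()
grad = img.grad.detach()

# Biased saliency: multiplying by the input zeroes out attribution wherever
# the normalized pixel values are near zero, suppressing those regions
# regardless of their relevance to the class.
saliency_biased = (grad * img.detach()).clamp(min=0).sum(dim=1)

# Bias-free saliency: keep the propagated gradient itself at the input layer,
# so dark (or mean-colored) but relevant pixels retain their attribution.
saliency_nobias = grad.clamp(min=0).sum(dim=1)
```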

