
Visualization of Supervised and Self-Supervised Neural Networks via Attribution Guided Factorization

by Shir Gur, et al.

Neural network visualization techniques mark image locations by their relevance to the network's classification. Existing methods are effective at highlighting the regions that most affect the resulting classification. However, as we show, these methods are limited in their ability to identify the support for alternative classifications, an effect we term the saliency bias hypothesis. In this work, we integrate two lines of research, gradient-based methods and attribution-based methods, and develop an algorithm that provides per-class explainability. The algorithm back-projects the per-pixel local influence in a manner that is guided by the local attributions, while correcting for salient features that would otherwise bias the explanation. In an extensive battery of experiments, we demonstrate the ability of our method to produce class-specific visualizations, not just visualizations for the predicted label. Remarkably, the method obtains state-of-the-art results both in benchmarks commonly applied to gradient-based methods and in those employed mostly for evaluating attribution methods. Using a new unsupervised procedure, our method also demonstrates that self-supervised methods learn semantic information.
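The core idea of per-class explainability can be illustrated in miniature: the explanation is computed with respect to a chosen target class, not only the argmax prediction. The sketch below is not the paper's attribution-guided factorization algorithm; it is a minimal gradient×input attribution on a toy linear classifier, showing that different target classes yield different attribution maps over the same input.

```python
import numpy as np

def gradient_x_input(W, x, target_class):
    """Class-specific gradient-x-input attribution for a linear model.

    For logits = W @ x, the gradient of logit c with respect to x is
    the row W[c], so the attribution is the elementwise product W[c] * x.
    """
    return W[target_class] * x

# Toy classifier: 3 classes over 4 input features (illustrative weights).
W = np.array([
    [2.0, 0.0, 0.0, 0.0],   # class 0 relies on feature 0
    [0.0, 2.0, 0.0, 0.0],   # class 1 relies on feature 1
    [0.0, 0.0, 1.0, 1.0],   # class 2 relies on features 2 and 3
])
x = np.ones(4)

# The same input receives a different explanation per target class,
# which is the behavior the paper's benchmarks probe for.
for c in range(3):
    print(f"class {c}: {gradient_x_input(W, x, c)}")
```

In a deep network the per-class gradient is obtained by backpropagation from the chosen logit rather than a single matrix row, but the contract is the same: the target class is an input to the explanation procedure.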



