
Visualization of Supervised and Self-Supervised Neural Networks via Attribution Guided Factorization

12/03/2020
by Shir Gur, et al.

Neural network visualization techniques mark image locations by their relevance to the network's classification. Existing methods are effective at highlighting the regions that most affect the resulting classification. However, as we show, these methods are limited in their ability to identify the support for alternative classifications, an effect we name the saliency bias hypothesis. In this work, we integrate two lines of research, gradient-based methods and attribution-based methods, and develop an algorithm that provides per-class explainability. The algorithm back-projects the per-pixel local influence in a manner that is guided by the local attributions, while correcting for salient features that would otherwise bias the explanation. In an extensive battery of experiments, we demonstrate the ability of our method to produce class-specific visualizations, and not only for the predicted label. Remarkably, the method obtains state-of-the-art results both in benchmarks commonly applied to gradient-based methods and in those employed mostly for evaluating attribution methods. Using a new unsupervised procedure, our method also demonstrates that self-supervised methods learn semantic information.
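To make "per-class explainability" concrete, the sketch below shows a minimal gradient-times-input attribution for a linear classifier, a common baseline in the gradient-based line of work the abstract refers to. This is not the paper's Attribution Guided Factorization algorithm (which adds attribution guidance and saliency correction on top of such signals); it only illustrates how an explanation can be conditioned on any queried class rather than just the predicted one. All names and shapes here are illustrative assumptions.

```python
import numpy as np

def gradient_x_input(weights, x, target_class):
    """Class-specific relevance for a linear model logit_c = w_c . x.

    For a linear model, the gradient of the target logit with respect to
    the input is simply w_c, so the gradient-times-input attribution map
    is the elementwise product x * w_c.
    """
    grad = weights[target_class]  # d(logit_c)/dx for a linear model
    return x * grad               # per-pixel relevance toward class c

rng = np.random.default_rng(0)
x = rng.normal(size=16)           # a flattened 4x4 "image" (illustrative)
W = rng.normal(size=(3, 16))      # a 3-class linear classifier (illustrative)

# The map depends on the class being queried, not only on the
# predicted label -- each class gets its own explanation.
maps = [gradient_x_input(W, x, c) for c in range(3)]
print([m.shape for m in maps])
```

In a deep network the per-class gradient would come from automatic differentiation of the chosen logit rather than a weight row, but the class-conditioning idea is the same.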

