A-FMI: Learning Attributions from Deep Networks via Feature Map Importance

04/12/2021
by An Zhang, et al.

Gradient-based attribution methods can aid in understanding convolutional neural networks (CNNs). However, attribution methods still face two challenges: the redundancy of attribution features and the gradient saturation problem, both of which weaken the ability to identify significant features and cause the explanation focus to shift. In this work, we propose: 1) an essential characteristic, Strong Relevance, for selecting attribution features; 2) a new concept, feature map importance (FMI), to refine the contribution of each feature map in a way that is faithful to the CNN model; and 3) a novel attribution method via FMI, termed A-FMI, to address the gradient saturation problem, which couples the target image with a reference image and assigns the FMI to the difference-from-reference at the granularity of feature maps. Through visual inspections and qualitative evaluations on the ImageNet dataset, we show the compelling advantages of A-FMI in its faithfulness, insensitivity to the choice of reference, class discriminability, and superior explanation performance compared with popular attribution methods across varying CNN architectures.
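To make the difference-from-reference idea concrete, the following is a minimal PyTorch sketch of an attribution of this shape. The FMI weighting used here (the spatially averaged gradient of the class score with respect to each feature map, in the style of Grad-CAM) is a stand-in assumption, not the paper's exact FMI formulation; the function name `a_fmi_sketch`, the choice of hooked layer, and the black reference image are illustrative choices, not prescribed by the paper.

```python
import torch
import torchvision.models as models

def a_fmi_sketch(model, layer, target, reference, target_class):
    """Weight the difference-from-reference of each feature map by a
    per-map importance score and sum into a spatial attribution map."""
    feats = {}
    handle = layer.register_forward_hook(lambda m, i, o: feats.update(maps=o))

    # Reference pass: we only need the reference feature maps, no gradients.
    with torch.no_grad():
        model(reference)
    ref_maps = feats["maps"]

    # Target pass: keep the graph so the class score can be differentiated.
    out = model(target)
    tgt_maps = feats["maps"]
    handle.remove()

    # Stand-in for FMI: gradient of the class score w.r.t. each feature map,
    # spatially averaged into one scalar weight per map (an assumption, not
    # the paper's exact definition of feature map importance).
    score = out[0, target_class]
    grads = torch.autograd.grad(score, tgt_maps)[0]   # (1, C, H, W)
    fmi = grads.mean(dim=(2, 3), keepdim=True)        # (1, C, 1, 1)

    # Attribute the difference-from-reference at feature-map granularity,
    # then collapse over channels to a spatial attribution map.
    diff = tgt_maps - ref_maps
    return (fmi * diff).sum(dim=1)                    # (1, H, W)

# Example: attribute a prediction of a pretrained VGG16 at its last conv
# layer, using an all-black image as the reference.
model = models.vgg16(pretrained=True).eval()
target = torch.randn(1, 3, 224, 224)    # stand-in for a preprocessed image
reference = torch.zeros_like(target)
attr = a_fmi_sketch(model, model.features[28], target, reference,
                    target_class=207)
```

The resulting map lives at the hooked layer's spatial resolution, so for visualization it would typically be upsampled to the input size; the black reference is just one option, and the paper reports that A-FMI is insensitive to the choice of reference.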


Related research

04/22/2022 · Locally Aggregated Feature Attribution on Natural Language Model Understanding
03/23/2022 · On Understanding the Influence of Controllable Factors with a Feature Attribution Algorithm: a Medical Case Study
05/25/2022 · How explainable are adversarially-robust CNNs?
02/19/2020 · Interpreting Interpretations: Organizing Attribution Methods by Criteria
10/01/2020 · Explaining Convolutional Neural Networks through Attribution-Based Input Sampling and Block-Wise Feature Aggregation
12/03/2020 · Visualization of Supervised and Self-Supervised Neural Networks via Attribution Guided Factorization
04/07/2021 · Information Bottleneck Attribution for Visual Explanations of Diagnosis and Prognosis
