"Is your explanation stable?": A Robustness Evaluation Framework for Feature Attribution

09/05/2022
by Yuyou Gan, et al.

Understanding the decision process of neural networks is hard. One vital approach to explanation is to attribute the model's decision to pivotal input features. Although many attribution algorithms have been proposed, most of them focus solely on improving faithfulness to the model. However, the real environment contains many random noises, which may lead to great fluctuations in the explanations. More seriously, recent works show that explanation algorithms are vulnerable to adversarial attacks. All of this makes explanations hard to trust in real scenarios. To bridge this gap, we propose a model-agnostic method, Median Test for Feature Attribution (MeTFA), to quantify the uncertainty and increase the stability of explanation algorithms with theoretical guarantees. MeTFA provides two functions: (1) it examines whether a feature is significantly important or unimportant and generates a MeTFA-significant map to visualize the results; (2) it computes a confidence interval for a feature attribution score and generates a MeTFA-smoothed map to increase the stability of the explanation. Experiments show that MeTFA improves the visual quality of explanations and significantly reduces instability while maintaining faithfulness. To quantitatively evaluate the faithfulness of an explanation under different noise settings, we further propose several robust faithfulness metrics. Experimental results show that the MeTFA-smoothed explanation can significantly increase robust faithfulness. In addition, we use two scenarios to demonstrate MeTFA's potential in applications. First, when applied to the SOTA explanation method to locate context bias for semantic segmentation models, MeTFA-significant explanations use far smaller regions to maintain 99%+ faithfulness. Second, when tested with different explanation-oriented attacks, MeTFA can help defend against vanilla, as well as adaptive, adversarial attacks on explanations.
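The abstract only describes MeTFA's two functions at a high level. As a rough illustration of the general idea, the sketch below smooths an attribution map over noisy copies of the input by taking the per-feature median and reports a distribution-free, order-statistic confidence interval for that median. The attribution function, the Gaussian input noise, and all parameter names are assumptions for illustration, not the paper's exact procedure.

```python
# Minimal sketch of median-based attribution smoothing with an order-statistic
# confidence interval, loosely following the abstract's description of MeTFA.
# `attribution_fn`, the Gaussian noise model, and all defaults are assumptions.
import numpy as np
from scipy.stats import binom


def metfa_smooth(x, attribution_fn, n_samples=50, sigma=0.1, alpha=0.05, seed=0):
    rng = np.random.default_rng(seed)
    # Attribution maps for n noisy copies of the input, shape (n_samples, *x.shape).
    maps = np.stack([
        attribution_fn(x + sigma * rng.standard_normal(x.shape))
        for _ in range(n_samples)
    ])
    sorted_maps = np.sort(maps, axis=0)

    # Approximate order-statistic indices for a (1 - alpha) CI on the median:
    # the number of samples below the true median is Binomial(n, 0.5), so the
    # interval between the k-th smallest and k-th largest sample covers it.
    k = int(binom.ppf(alpha / 2, n_samples, 0.5))
    lo, hi = k, n_samples - 1 - k                # 0-based indices into sorted_maps
    smoothed = np.median(maps, axis=0)           # "smoothed" attribution map
    return smoothed, sorted_maps[lo], sorted_maps[hi]
```

Under these assumptions, a feature whose whole interval lies above (or below) a chosen reference score could be flagged as significantly important (or unimportant), mirroring the significance-map idea described in the abstract.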


Related research:

03/14/2022 - Rethinking Stability for Attribution-based Explanations
As attribution-based explanation methods are increasingly used to establ...

03/04/2022 - Do Explanations Explain? Model Knows Best
It is a mystery which input features contribute to a neural network's ou...

04/07/2021 - Information Bottleneck Attribution for Visual Explanations of Diagnosis and Prognosis
Visual explanation methods have an important role in the prognosis of th...

06/18/2021 - NoiseGrad: enhancing explanations by introducing stochasticity to model weights
Attribution methods remain a practical instrument that is used in real-w...

11/08/2021 - Defense Against Explanation Manipulation
Explainable machine learning attracts increasing attention as it improve...

04/11/2022 - Generalizing Adversarial Explanations with Grad-CAM
Gradient-weighted Class Activation Mapping (Grad-CAM) is an example-ba...

12/28/2022 - Robust Ranking Explanations
Gradient-based explanation is the cornerstone of explainable deep networ...
