Towards More Robust Interpretation via Local Gradient Alignment

11/29/2022
by Sunghwan Joo, et al.

Neural network interpretation methods, particularly feature attribution methods, are known to be fragile with respect to adversarial input perturbations. To address this, several methods that enhance the local smoothness of the gradient during training have been proposed for attaining robust feature attributions. However, the lack of consideration of attribution normalization, which is essential for their visualization, has been an obstacle to understanding and improving the robustness of feature attribution methods. In this paper, we provide new insights by taking such normalization into account. First, we show that for every non-negative homogeneous neural network, a naive ℓ_2-robust criterion for gradients is not normalization invariant: two functions with the same normalized gradient can yield different criterion values. Second, we formulate a normalization-invariant, cosine distance-based criterion and derive its upper bound, which gives insight into why simply minimizing the Hessian norm at the input, as has been done in previous work, is not sufficient for attaining robust feature attribution. Finally, we propose to combine both the ℓ_2 and the cosine distance-based criteria as regularization terms, leveraging the advantages of both in aligning the local gradient. As a result, we experimentally show that models trained with our method produce much more robust interpretations on CIFAR-10 and ImageNet-100 without significantly hurting accuracy, compared to recent baselines. To the best of our knowledge, this is the first work to verify the robustness of interpretation on a larger-scale dataset beyond CIFAR-10, thanks to the computational efficiency of our method.
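A minimal sketch of the kind of combined regularizer described above, assuming a PyTorch setup. The perturbation scheme (a random perturbation in place of a worst-case one), the loss, and the weights `eps`, `lam_l2`, `lam_cos` are illustrative placeholders, not the paper's exact formulation; the point is only that the ℓ_2 term is sensitive to gradient magnitude while the cosine term is invariant to the scaling that normalization removes.

```python
import torch
import torch.nn.functional as F

def gradient_alignment_penalty(model, x, y, eps=1e-2, lam_l2=1.0, lam_cos=1.0):
    """Hypothetical regularizer combining an l2 and a cosine-distance term
    between the input gradient at x and at a nearby perturbed point."""
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    grad = torch.autograd.grad(loss, x, create_graph=True)[0]

    # Random perturbation within an eps-ball: a simple stand-in for the
    # worst-case perturbation considered in robustness analyses.
    x_pert = (x + eps * torch.randn_like(x)).detach().requires_grad_(True)
    loss_pert = F.cross_entropy(model(x_pert), y)
    grad_pert = torch.autograd.grad(loss_pert, x_pert, create_graph=True)[0]

    g1, g2 = grad.flatten(1), grad_pert.flatten(1)

    # l2 criterion: penalizes any change in the gradient, including pure
    # rescaling, so it is not invariant to attribution normalization.
    l2_term = (g1 - g2).norm(dim=1).mean()
    # Cosine-distance criterion: depends only on gradient direction, so it
    # is invariant to the scaling removed by normalization.
    cos_term = (1.0 - F.cosine_similarity(g1, g2, dim=1)).mean()

    return lam_l2 * l2_term + lam_cos * cos_term
```

In training, such a penalty would be added to the standard classification loss, e.g. `total = F.cross_entropy(model(x), y) + gradient_alignment_penalty(model, x, y)`.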

