NoiseGrad: enhancing explanations by introducing stochasticity to model weights

06/18/2021
by Kirill Bykov et al.

Attribution methods remain a practical instrument used in real-world applications to explain the decision-making process of complex learning machines. It has been shown that a simple method called SmoothGrad can effectively reduce the visual diffusion of gradient-based attribution methods, and it has established itself among both researchers and practitioners. What remains unexplored in research, however, is how explanations can be improved by introducing stochasticity to the model weights. In light of this, we introduce NoiseGrad, a stochastic, method-agnostic, explanation-enhancing method that adds noise to the model weights instead of the input data. We investigate the proposed method through various experiments spanning different datasets, explanation methods, and network architectures, and conclude that NoiseGrad (and its extension NoiseGrad++) with multiplicative Gaussian noise offers a clear advantage over SmoothGrad on several evaluation criteria. We connect the proposed method to Bayesian learning and provide the user with a heuristic for choosing hyperparameters.
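To make the idea concrete, below is a minimal PyTorch sketch of the weight-noise averaging described in the abstract: sample several copies of the model with multiplicative Gaussian noise applied to the weights, compute a base attribution for each copy, and average. The base attribution here is a plain input gradient, and the function name, parameter names, and defaults (`n_samples`, `sigma`) are illustrative assumptions, not the authors' reference implementation.

```python
import copy

import torch


def noisegrad_attribution(model, x, target, n_samples=10, sigma=0.2):
    """Sketch: average gradient attributions over weight-perturbed model copies."""
    attributions = []
    for _ in range(n_samples):
        # Perturb a copy of the model so the original weights stay untouched.
        noisy_model = copy.deepcopy(model)
        with torch.no_grad():
            for param in noisy_model.parameters():
                # Multiplicative Gaussian noise on the weights: w <- w * eps, eps ~ N(1, sigma^2)
                param.mul_(1.0 + sigma * torch.randn_like(param))

        x_in = x.clone().detach().requires_grad_(True)
        out = noisy_model(x_in)           # logits of shape (batch, classes)
        out[:, target].sum().backward()   # gradient of the target logit w.r.t. the input
        attributions.append(x_in.grad.detach())

    # Average over the noisy models to obtain the NoiseGrad explanation.
    return torch.stack(attributions).mean(dim=0)
```

A NoiseGrad++-style variant would additionally perturb the input inside the inner loop, in the spirit of SmoothGrad, combining weight-space and input-space noise before averaging.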


