Towards the Unification and Robustness of Perturbation and Gradient Based Explanations

02/21/2021
by Sushant Agarwal, et al.

As machine learning black boxes are increasingly being deployed in critical domains such as healthcare and criminal justice, there has been a growing emphasis on developing techniques for explaining these black boxes in a post hoc manner. In this work, we analyze two popular post hoc interpretation techniques: SmoothGrad, which is a gradient-based method, and a variant of LIME, which is a perturbation-based method. More specifically, we derive explicit closed-form expressions for the explanations output by these two methods and show that they both converge to the same explanation in expectation, i.e., when the number of perturbed samples used by these methods is large. We then leverage this connection to establish other desirable properties, such as robustness, for these techniques. We also derive finite-sample complexity bounds on the number of perturbations required for these methods to converge to their expected explanation. Finally, we empirically validate our theory using extensive experimentation on both synthetic and real-world datasets.
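A minimal sketch (not the authors' code) of the connection the abstract describes: for a toy differentiable model, SmoothGrad's averaged gradients over Gaussian perturbations and a LIME-style local least-squares fit over the same perturbation distribution produce nearly identical attributions as the number of perturbed samples grows. The toy logistic model, the noise scale sigma, and the sample sizes below are illustrative assumptions, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy black box: logistic regression with fixed weights, so the gradient is analytic.
w = np.array([1.5, -2.0, 0.5])

def f(Z):
    """Model output; Z may be a single point (1-D) or a batch of points (2-D)."""
    return 1.0 / (1.0 + np.exp(-(Z @ w)))

def smoothgrad(x, sigma, n):
    """SmoothGrad-style attribution: average the model gradient over n Gaussian
    perturbations of x."""
    Z = x + rng.normal(0.0, sigma, size=(n, x.size))
    p = f(Z)
    grads = (p * (1.0 - p))[:, None] * w  # analytic gradient of the logistic model
    return grads.mean(axis=0)

def lime_gaussian(x, sigma, n):
    """LIME-style attribution (Gaussian variant): least-squares linear fit of f
    over Gaussian perturbations centred at x; returns the slope coefficients."""
    Z = x + rng.normal(0.0, sigma, size=(n, x.size))
    y = f(Z)
    Zc, yc = Z - Z.mean(axis=0), y - y.mean()  # centring absorbs the intercept
    coef, *_ = np.linalg.lstsq(Zc, yc, rcond=None)
    return coef

x = np.array([0.3, -0.1, 0.8])
for n in (100, 10_000, 1_000_000):
    print(n, np.round(smoothgrad(x, 0.5, n), 4), np.round(lime_gaussian(x, 0.5, n), 4))
```

With Gaussian sampling, the population least-squares slope equals the expected gradient (Stein's identity), which is the quantity SmoothGrad estimates, so the two printed attribution vectors should agree more closely as n increases.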


