Locally Invariant Explanations: Towards Stable and Unidirectional Explanations through Local Invariant Learning

01/28/2022
by Amit Dhurandhar, et al.

The Local Interpretable Model-agnostic Explanations (LIME) method is one of the most popular approaches for explaining black-box models at a per-example level. Although many variants have been proposed, few provide a simple way to produce high-fidelity explanations that are also stable and intuitive. In this work, we offer a novel perspective by proposing a model-agnostic local explanation method inspired by the invariant risk minimization (IRM) principle, originally proposed for (global) out-of-distribution generalization, to produce high-fidelity explanations that are also stable and unidirectional across nearby examples. Our method is based on a game-theoretic formulation in which we theoretically show that our approach has a strong tendency to eliminate features where the gradient of the black-box function abruptly changes sign in the locality of the example being explained, while in other cases it is more careful and chooses a more conservative (feature) attribution, a behavior that can be highly desirable for recourse. Empirically, we show on tabular, image, and text data that the quality of our explanations, with neighborhoods formed using random perturbations, is much better than that of LIME and in some cases even comparable to other methods that use realistic neighbors sampled from the data manifold. This is desirable given that learning a manifold to either create realistic neighbors or to project explanations is typically expensive, or may even be impossible. Moreover, our algorithm is simple and efficient to train, and can ascertain stable input features for local decisions of a black-box model without access to side information such as a (partial) causal graph, as has been required in some recent works.
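The abstract does not include code, but the general idea it describes can be sketched in a few lines: sample several LIME-style perturbation neighborhoods around the example, treat them as IRM "environments", and penalize a linear surrogate whose coefficients are not simultaneously optimal across them. The sketch below is an illustration only, not the authors' game-theoretic formulation; the black-box function, noise scales, and the use of the IRMv1 penalty (Arjovsky et al.) are all assumptions.

```python
# A minimal sketch (not the paper's algorithm): a linear local surrogate
# fit with an IRMv1-style invariance penalty across several perturbation
# neighborhoods ("environments") around the example being explained.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)

def black_box(X):
    # Stand-in for an opaque model: feature 0 enters through a bumpy
    # nonlinearity (its local slope flips sign nearby), feature 1 is linear.
    return np.sin(3.0 * X[:, 0]) + 0.5 * X[:, 1]

x0 = np.array([0.45, -0.10])             # example to explain (assumed)
n_env, n_per_env, d = 4, 200, x0.size

# Each environment is a random perturbation neighborhood of x0 with its
# own noise scale, playing the role of a distinct local distribution.
envs = []
for e in range(n_env):
    X = x0 + rng.normal(0.0, 0.05 * (e + 1), size=(n_per_env, d))
    y = black_box(X)
    envs.append((X - x0, y - y.mean()))  # center per environment

def irm_objective(beta, lam=10.0):
    risk = penalty = 0.0
    for Xc, yc in envs:
        resid = Xc @ beta - yc
        risk += np.mean(resid ** 2)
        # IRMv1 penalty: squared gradient of this environment's risk
        # w.r.t. a scalar multiplier s on the predictor, evaluated at s = 1.
        penalty += (2.0 * np.mean(resid * (Xc @ beta))) ** 2
    return (risk + lam * penalty) / n_env

beta = minimize(irm_objective, np.zeros(d), method="L-BFGS-B").x
print("invariant local attribution:", beta)
```

Under this sketch, the attribution for feature 0, whose local slope 3cos(3x) changes sign just past x = 0.45, is driven toward zero as the penalty weight grows, while the stably linear feature 1 keeps an attribution near 0.5, mirroring the behavior the abstract claims for features whose gradient abruptly changes sign in the locality of the example.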

