EMAP: Explanation by Minimal Adversarial Perturbation

12/02/2019
by Matt Chapman-Rounds, et al.

Modern instance-based, model-agnostic explanation methods (LIME, SHAP, L2X) are of great use in data-heavy industries for model diagnostics and for end-user explanations. These methods generally return either a weighting or a subset of input features as an explanation of the classification of an instance. An alternative literature argues instead that counterfactual instances provide a more usable characterisation of a black-box classifier's decisions. We present EMAP, a neural-network-based approach which returns as Explanation the Minimal Adversarial Perturbation to an instance required to cause the underlying black-box model to misclassify it. We show that this approach combines the two paradigms, recovering the output of feature-weighting methods in continuous feature spaces whilst also indicating the direction in which the nearest counterfactuals can be found. Our method also provides an implicit confidence estimate in its own explanations, adding a clarity to model diagnostics that other methods lack. Additionally, EMAP improves upon the speed of sampling-based methods such as LIME by an order of magnitude, allowing for model explanations in time-critical applications, or at the dataset level, where sampling-based methods are infeasible. We extend our approach to categorical features using a partitioned Gumbel layer, and demonstrate its efficacy on several standard datasets.
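To make the core idea concrete, the sketch below illustrates what a "minimal adversarial perturbation as explanation" looks like. This is not the authors' method: EMAP trains a neural network to produce such perturbations for an arbitrary black box, whereas this toy example simply runs gradient descent against a hypothetical differentiable classifier (a logistic unit with made-up weights), trading off a misclassification objective against an L2 penalty that keeps the perturbation small. The resulting `delta` plays the role of the explanation: its components weight the features, and its direction points toward the nearest counterfactual.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Hypothetical "black box": a logistic classifier with fixed weights,
# chosen only so the example is self-contained and differentiable.
w = np.array([2.0, -1.0])
b = -0.5

def predict_proba(x):
    return sigmoid(w @ x + b)

def minimal_perturbation(x, lr=0.1, lam=0.5, steps=500):
    """Gradient-descent search for a small delta that flips the class.

    Minimises lam * ||delta||^2 plus a cross-entropy term pushing the
    prediction toward the opposite class -- a crude stand-in for the
    objective of a *minimal* adversarial perturbation.
    """
    target = 0.0 if predict_proba(x) > 0.5 else 1.0
    delta = np.zeros_like(x)
    for _ in range(steps):
        p = predict_proba(x + delta)
        # d/d(delta) of [BCE toward target] is (p - target) * w for a
        # logistic unit; add the gradient of the L2 size penalty.
        grad = (p - target) * w + 2.0 * lam * delta
        delta -= lr * grad
    return delta

x = np.array([1.0, 0.5])               # instance classified as class 1
delta = minimal_perturbation(x)
print(predict_proba(x) > 0.5)          # original class: True (class 1)
print(predict_proba(x + delta) > 0.5)  # perturbed class has flipped
print(delta)                           # direction toward the counterfactual
```

The L2 weight `lam` controls the trade-off between flipping the decision and keeping the perturbation minimal; in a feature-weighting reading, the larger components of `delta` indicate the features the classifier's decision is most sensitive to.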

