Fairwashing Explanations with Off-Manifold Detergent

by Christopher J. Anders, et al.

Explanation methods promise to make black-box classifiers more transparent. As a result, it is hoped that they can serve as evidence of a sensible, fair, and trustworthy decision-making process and thereby increase the algorithm's acceptance by end-users. In this paper, we show both theoretically and experimentally that these hopes are presently unfounded. Specifically, we show that, for any classifier g, one can always construct another classifier g̃ which has the same behavior on the data (same train, validation, and test error) but has arbitrarily manipulated explanation maps. We derive this statement theoretically using differential geometry and demonstrate it experimentally for various explanation methods, architectures, and datasets. Motivated by our theoretical insights, we then propose a modification of existing explanation methods which makes them significantly more robust.
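The key idea can be illustrated with a toy sketch (this is an assumed minimal example, not the paper's actual construction): when the data lie on a low-dimensional manifold, one can add a term h that vanishes on every data point but has a nonzero gradient there, so g̃ = g + h makes identical predictions on the data while its gradient-based explanation map is steered arbitrarily off-manifold. The linear classifier, the data manifold, and the constant `c` below are all hypothetical choices for illustration.

```python
import numpy as np

# Toy linear "classifier" g(x) = w . x on 2-D inputs.
w = np.array([1.0, 2.0])

def g(x):
    return x @ w

# Training data lie on a 1-D manifold in 2-D space: x2 = 0.5 * x1.
X = np.array([[1.0, 0.5], [2.0, 1.0], [3.0, 1.5]])

# Pick h(x) = c * (x2 - 0.5 * x1): it is exactly zero on the manifold,
# so g_tilde agrees with g on every data point, yet its gradient
# w + c * (-0.5, 1) can be pushed anywhere by choosing c.
c = 10.0

def g_tilde(x):
    return g(x) + c * (x[..., 1] - 0.5 * x[..., 0])

grad_g = w                                  # explanation map of g
grad_g_tilde = w + c * np.array([-0.5, 1.0])  # explanation map of g_tilde

# Same predictions on all data points ...
assert np.allclose(g(X), g_tilde(X))
# ... but very different gradient "explanations".
print(grad_g, grad_g_tilde)
```

In this linear toy case the gradient is constant; for a neural network the same effect is achieved locally, which is why the paper's proposed defense restricts explanations toward the data manifold.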


Explanations can be manipulated and geometry is to blame

Explanation methods aim to make neural networks more trustworthy and int...

Fooling Explanations in Text Classifiers

State-of-the-art text classification models are becoming increasingly re...

Interpretable & Explorable Approximations of Black Box Models

We propose Black Box Explanations through Transparent Approximations (BE...

Framework for Evaluating Faithfulness of Local Explanations

We study the faithfulness of an explanation system to the underlying pre...

Explaining Natural Language Processing Classifiers with Occlusion and Language Modeling

Deep neural networks are powerful statistical learners. However, their p...

Towards Robust Explanations for Deep Neural Networks

Explanation methods shed light on the decision process of black-box clas...

Evaluating Explanation Methods for Neural Machine Translation

Recently many efforts have been devoted to interpreting the black-box NM...