Attribution-based Explanations that Provide Recourse Cannot be Robust

05/31/2022
by Hidde Fokkema et al.

Different users of machine learning methods require different explanations, depending on their goals. To make machine learning accountable to society, one important goal is to provide actionable options for recourse, which allow an affected user to change the decision f(x) of a machine learning system by making limited changes to its input x. We formalize this by providing a general definition of recourse sensitivity, which needs to be instantiated with a utility function that describes which changes to the decisions are relevant to the user. This definition applies to local attribution methods, which attribute an importance weight to each input feature. It is often argued that such local attributions should be robust, in the sense that a small change in the input x that is being explained should not cause a large change in the feature weights. However, we prove formally that it is in general impossible for any single attribution method to be both recourse sensitive and robust at the same time. It follows that there must always exist counterexamples to at least one of these properties. We provide such counterexamples for several popular attribution methods, including LIME, SHAP, Integrated Gradients and SmoothGrad. Our results also cover counterfactual explanations, which may be viewed as attributions that describe a perturbation of x. We further discuss possible ways to work around our impossibility result, for instance by allowing the output to consist of sets with multiple attributions. Finally, we strengthen our impossibility result for the restricted case where users are only able to change a single attribute of x, by providing an exact characterization of the functions f to which impossibility applies.
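
As a minimal sketch of the tension the abstract describes (not taken from the paper; the function f, the rejection interval (-1, 1), and the helper nearest_recourse_direction are all illustrative assumptions), consider a counterfactual-style attribution that always points to the nearest accepted input. It provides recourse everywhere, but it cannot be robust: at an input equidistant from two accepted regions, an arbitrarily small change flips the attribution from one direction to the other.

    # Toy 1-D example (illustrative only, not the paper's construction):
    # the model rejects inputs in the open interval (-1, 1) and accepts the rest.
    def f(x):
        return 1.0 if abs(x) >= 1.0 else 0.0

    # Counterfactual-style attribution: the signed perturbation that moves x
    # to the nearest accepted point. A recourse-sensitive attribution must
    # point towards some accepted region whenever the decision is unfavourable.
    def nearest_recourse_direction(x):
        if f(x) == 1.0:
            return 0.0  # already accepted, no change needed
        return (1.0 - x) if x >= 0 else (-1.0 - x)

    # Robustness would require the attribution to vary little under small
    # input changes, but around x = 0 it jumps from roughly -1 to roughly +1.
    eps = 1e-6
    print(nearest_recourse_direction(-eps))  # ~ -1.0: decrease x towards -1
    print(nearest_recourse_direction(+eps))  # ~ +1.0: increase x towards +1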


