Iterative Partial Fulfillment of Counterfactual Explanations: Benefits and Risks

03/17/2023
by   Yilun Zhou, et al.

Counterfactual (CF) explanations, also known as contrastive explanations and recourses, are popular for explaining machine learning model predictions in high-stakes domains. For a subject who receives a negative model prediction (e.g., a mortgage application denial), a CF explanation is a similar instance that receives a positive prediction, informing the subject of ways to improve. Various properties of CF explanations have been studied, such as validity, feasibility, and stability. In this paper, we contribute a novel aspect: their behaviors under iterative partial fulfillment (IPF). Specifically, upon receiving a CF explanation, the subject may only partially fulfill it before requesting a new prediction with a new explanation, repeating until the prediction is positive. Such partial fulfillment could be due to the subject's limited capability (e.g., being able to pay down only two out of four credit card accounts at the moment) or an attempt to take a chance (e.g., betting that a monthly salary increase of $800 is enough even though $1,000 is recommended). Does such iterative partial fulfillment increase or decrease the total cost of improvement incurred by the subject? We first propose a mathematical formalization of IPF and then demonstrate, both theoretically and empirically, that different CF algorithms exhibit vastly different behaviors under IPF and hence different effects on the subject's welfare, warranting consideration of this factor in studies of CF algorithms. We discuss implications of our observations and give several directions for future work.
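As a toy illustration of the IPF loop described above (not the paper's actual formalization), the sketch below assumes a simple linear classifier and a closest-point counterfactual generator; all names (`predict`, `counterfactual`, `ipf`) and the fulfillment fraction `alpha` are hypothetical choices for this example:

```python
import numpy as np

# Toy linear model: prediction is positive iff W @ x + B > 0.
W = np.array([1.0, 1.0])
B = -4.0

def predict(x):
    return W @ x + B > 0

def counterfactual(x, margin=0.1):
    # Closest positively predicted point: project x onto the decision
    # hyperplane, then step slightly past it by `margin`.
    dist = (W @ x + B) / np.linalg.norm(W)  # signed distance to boundary
    return x - (dist - margin) * W / np.linalg.norm(W)

def ipf(x, alpha=0.5, max_rounds=50):
    """Iterative partial fulfillment: move a fraction `alpha` of the way
    toward each CF, re-query the model, and repeat until the prediction
    turns positive. Returns the final point and total cost (path length)."""
    total_cost = 0.0
    for _ in range(max_rounds):
        if predict(x):
            break
        step = alpha * (counterfactual(x) - x)
        total_cost += np.linalg.norm(step)
        x = x + step
    return x, total_cost

x0 = np.array([1.0, 1.0])
x_final, cost = ipf(x0)
```

In this particular linear setup, partial fulfillment happens to cost slightly less than fulfilling the first CF outright (each re-query shrinks the remaining overshoot past the boundary); the paper's point is precisely that such cost effects vary across CF algorithms and model classes.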

Related Research

11/13/2018 — Interpretable Credit Application Predictions With Counterfactual Explanations
We predict credit applications with off-the-shelf, interchangeable black...

01/30/2022 — Debiased-CAM to mitigate systematic error with faithful visual explanations of machine learning
Model explanations such as saliency maps can improve user trust in AI by...

03/16/2023 — Explaining Groups of Instances Counterfactually for XAI: A Use Case, Algorithm and User Study for Group-Counterfactuals
Counterfactual explanations are an increasingly popular form of post hoc...

01/21/2020 — Adequate and fair explanations
Explaining sophisticated machine-learning based systems is an important ...

04/02/2023 — The Effect of Counterfactuals on Reading Chest X-rays
This study evaluates the effect of counterfactual explanations on the in...

01/29/2022 — Counterfactual Plans under Distributional Ambiguity
Counterfactual explanations are attracting significant attention due to ...

11/09/2020 — Explaining Deep Graph Networks with Molecular Counterfactuals
We present a novel approach to tackle explainability of deep graph netwo...
