
Improvement-Focused Causal Recourse (ICR)

by Gunnar König et al.

Algorithmic recourse recommendations, such as Karimi et al.'s (2021) causal recourse (CR), inform stakeholders of how to act to revert unfavourable decisions. However, some actions lead to acceptance (i.e., revert the model's decision) but do not lead to improvement (i.e., may not revert the underlying real-world state). To recommend such actions is to recommend fooling the predictor. We introduce a novel method, Improvement-Focused Causal Recourse (ICR), which involves a conceptual shift: Firstly, we require ICR recommendations to guide towards improvement. Secondly, we do not tailor the recommendations to be accepted by a specific predictor. Instead, we leverage causal knowledge to design decision systems that predict accurately pre- and post-recourse. As a result, improvement guarantees translate into acceptance guarantees. We demonstrate that given correct causal knowledge, ICR, in contrast to existing approaches, guides towards both acceptance and improvement.
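The gap between acceptance and improvement can be illustrated with a toy structural causal model (a hypothetical sketch for intuition, not the paper's actual setup or implementation): a cause `X1` determines the true state `Y`, while `X2` is merely a downstream effect of `Y`. A correlational predictor that relies on `X2` can be fooled by intervening on `X2` (acceptance without improvement), whereas intervening on the cause `X1` changes `Y` itself, so acceptance follows from improvement.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy SCM (illustrative):  X1 -> Y -> X2
# X1 causes the real-world state Y; X2 is only a symptom of Y.
def sample_scm(x1_intervention=None, x2_intervention=None, n=10_000):
    x1 = rng.normal(0, 1, n) if x1_intervention is None else np.full(n, float(x1_intervention))
    y = (x1 + rng.normal(0, 0.5, n) > 0).astype(int)  # true underlying state
    x2 = y + rng.normal(0, 0.5, n) if x2_intervention is None else np.full(n, float(x2_intervention))
    return x1, y, x2

# A purely correlational predictor that keys on the effect X2.
def predict(x1, x2):
    return (x2 > 0.5).astype(int)

# "Acceptance-only" recourse: intervene on the symptom X2.
# The predictor accepts everyone, but the distribution of Y is unchanged.
x1, y, x2 = sample_scm(x2_intervention=2.0)
print("do(X2=2): acceptance", predict(x1, x2).mean(), "improvement", y.mean())

# Improvement-focused recourse: intervene on the cause X1.
# Y itself changes, and acceptance is a downstream consequence.
x1, y, x2 = sample_scm(x1_intervention=2.0)
print("do(X1=2): acceptance", predict(x1, x2).mean(), "improvement", y.mean())
```

In the first intervention the acceptance rate is high while the improvement rate stays near its baseline; in the second, improvement drives acceptance. ICR's point is that with correct causal knowledge, recommendations should target the second kind of action, and the decision system should be designed so the acceptance guarantee follows from the improvement guarantee.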



A Causal Perspective on Meaningful and Robust Algorithmic Recourse

Algorithmic recourse explanations inform stakeholders on how to act to r...

Algorithmic recourse under imperfect causal knowledge: a probabilistic approach

Recent work has discussed the limitations of counterfactual explanations...

Learning From Strategic Agents: Accuracy, Improvement, and Causality

In many predictive decision-making scenarios, such as credit scoring and...

Causal datasheet: An approximate guide to practically assess Bayesian networks in the real world

In solving real-world problems like changing healthcare-seeking behavior...

Causally Invariant Predictor with Shift-Robustness

This paper proposes an invariant causal predictor that is robust to dist...

Extracting Incentives from Black-Box Decisions

An algorithmic decision-maker incentivizes people to act in certain ways...