
A Causal Perspective on Meaningful and Robust Algorithmic Recourse

by   Gunnar König, et al.

Algorithmic recourse explanations inform stakeholders how to act to revert unfavorable predictions. However, ML models generally do not predict well under interventional distributions, so an action that changes the prediction in the desired way may not improve the underlying target. Such recourse is neither meaningful nor robust to model refits. Extending the work of Karimi et al. (2021), we propose meaningful algorithmic recourse (MAR), which only recommends actions that improve both prediction and target. We justify this selection constraint by highlighting the differences between model audits and meaningful, actionable recourse explanations. Additionally, we introduce a relaxation of MAR called effective algorithmic recourse (EAR), which, under certain assumptions, yields meaningful recourse by allowing interventions only on causes of the target.
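The gap between changing a prediction and improving the target can be illustrated with a toy structural causal model. This is a minimal sketch, not the paper's setup: the variables, coefficients, and noise scales below are all hypothetical. Here x1 causes the target y, while x2 is an effect of y; a linear model fit on observational data leans on both, so intervening on the effect x2 moves the prediction without moving y.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

# Hypothetical SCM:  x1 -> y -> x2
# x1 is a cause of the target y; x2 is an effect of y.
x1 = rng.normal(size=n)
y = x1 + 0.1 * rng.normal(size=n)
x2 = y + 0.1 * rng.normal(size=n)

# Fit a linear predictor on observational data via least squares.
X = np.column_stack([x1, x2])
w, *_ = np.linalg.lstsq(X, y, rcond=None)

# Action A: intervene on the effect, do(x2 := x2 + 1).
# The prediction shifts by w[1], but y does not listen to its effect.
pred_shift_x2 = w[1] * 1.0
target_shift_x2 = 0.0

# Action B: intervene on the cause, do(x1 := x1 + 1).
# y shifts by 1 (y = x1 + noise), and so does x2 downstream,
# so the prediction shifts by w[0] + w[1].
pred_shift_x1 = w[0] * 1.0 + w[1] * 1.0
target_shift_x1 = 1.0

print(f"do(x2+1): prediction moves {pred_shift_x2:+.2f}, target moves {target_shift_x2:+.2f}")
print(f"do(x1+1): prediction moves {pred_shift_x1:+.2f}, target moves {target_shift_x1:+.2f}")
```

Action A is recourse that "games" the model (and would likely break under a refit on interventional data), whereas action B is the kind of cause-only intervention EAR restricts to: it improves both prediction and target.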



