1 Introduction
Predictive systems are increasingly deployed in high-stakes environments such as hiring (Raghavan et al., 2020), recidivism prediction (Zeng et al., 2015) or loan approval (Van Liebergen, 2017). To enable individuals to revert unfavorable decisions, a range of work develops tools that offer individuals possibilities for algorithmic recourse (Wachter et al., 2018; Dandl et al., 2020; Karimi et al., 2020b, 2021). When suggesting actions for recourse, it is desirable that recommendations are robust to shifts, meaning that they can be honored if acted upon (Venkatasubramanian and Alfano, 2020). We argue that they should also be meaningful, such that not only the prediction but also the underlying target is improved.
We take a causal perspective on the issue at hand and argue that robustness and meaningfulness are related problems, since non-meaningful recourse itself leads to distribution shift.
Let us consider a simple motivational example (inspired by Shavit et al. (2020)) illustrated in Figure 1. The goal is to predict the insurance risk of a car. In addition to the direct cause of whether the car is driven by the car owner (green), the confounded variable minivan is observed (blue). The latent confounder driver defensiveness cannot be observed. The ML model learns to exploit not only the direct cause (green) but also the associated variable (blue). Algorithmic recourse actions on the model may therefore suggest that explainees game the predictor by intervening on the non-causal variable minivan, thereby affecting the prediction without actually improving the insurance risk.
In the distribution of agents that have gamed the prediction model, the association of minivan with the prediction target is broken. A refitted model would adapt accordingly and the recourse would not be honored.
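The gaming dynamic of this example can be reproduced in a short simulation (a minimal sketch; the coefficients, effect sizes and variable names are invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

# Latent confounder: driver defensiveness (unobserved by the model)
defensive = rng.normal(size=n)
# Observed direct cause: whether the car is driven by the owner
owner = rng.normal(size=n)
# Observed, confounded, non-causal feature: minivan (driven by defensiveness)
minivan = defensive + 0.1 * rng.normal(size=n)
# Target: insurance risk depends on owner and defensiveness, NOT on minivan
risk = -owner - defensive + 0.1 * rng.normal(size=n)

# Least-squares predictor on the observed features (owner, minivan)
X = np.column_stack([owner, minivan])
beta, *_ = np.linalg.lstsq(X, risk, rcond=None)
assert abs(beta[1]) > 0.5  # the model exploits the non-causal proxy

# Gaming: intervene on minivan only; the predicted risk drops,
# while the true risk is untouched by construction.
pred_pre = X @ beta
pred_post = np.column_stack([owner, minivan + 2.0]) @ beta
assert pred_post.mean() < pred_pre.mean()
```

A model refitted on data from agents who intervened on minivan would see the proxy's association with risk weakened, which is exactly the shift described above.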
Miller et al. (2020) suggest adapting the model such that gaming is not incentivized, which may come at the cost of predictive performance (Shavit et al., 2020). Instead, we suggest tackling the problem in the explanation domain. We propose Meaningful Algorithmic Recourse (MAR) (Section 4), which restricts recourse recommendations to meaningful actions that alter both the prediction and the target coherently. We justify the restriction by separating two goals: contestability and recourse (Section 5). While stakeholders who seek explanations for model audit (contestability) should be given full access to the model, agents who seek to revert an unfavorable outcome (recourse) should only be offered recourse options that are meaningful. A relaxation of the meaningfulness restriction is introduced in Section 6.
2 Related Work and Contributions
Causal Perspective on Strategic Modeling: The related field of strategic modeling investigates how the prediction mechanism incentivizes rational agents. Miller et al. (2020) thereby distinguish models that incentivize gaming, i.e., interventions that affect the prediction but not the underlying target in the desired way, from models that incentivize improvement, i.e., actions that yield the desired change both in the prediction and in the underlying target.
Building on this distinction, Shavit et al. (2020) elaborate that, except for cases where all causes can be measured, the following three goals are in conflict: incentivizing improvement, predictive accuracy, and retrieving the true underlying mechanism.
Robust Algorithmic Recourse: Barocas et al. (2020) and Venkatasubramanian and Alfano (2020) argue that counterfactual explanations (CE) assume the model to be stable over time, but that recourse should be guaranteed even if the model changes. In a similar vein, Wachter et al. (2018) suggest guaranteeing recourse based on a counterfactual within a prespecified period of time.
The robustness of CE and recourse has been investigated before (Rawal et al., 2020; Upadhyay et al., 2021; Pawelczyk et al., 2020), yet only with respect to generic shifts. To the best of our knowledge, we are the first to suggest recommendations that are robust to the shift induced by recourse itself.
Contributions: We suggest restricting algorithmic recourse to recommend meaningful actions that improve prediction and outcome coherently. In contrast to work in the strategic modeling literature (Miller et al., 2020), our approach does not require adapting the model at the cost of predictive performance. We justify the restriction by distinguishing two explanation goals: model audit and algorithmic recourse. Furthermore, we suggest a relaxation of MAR that does not require computing the structural counterfactual of the target Y. We derive assumptions under which the relaxation guarantees meaningful recourse.
3 Background and Notation
3.1 Causal Data Model
We model the data-generating process using a structural causal model (SCM) (Pearl, 2009; Peters et al., 2018). The model consists of the endogenous variables X, the mutually independent exogenous variables U and a sequence of structural equations X_j := f_j(X_pa(j), U_j). The SCM entails a directed graph G. The structural equations specify how each X_j is determined from its endogenous parents X_pa(j) and its exogenous variable U_j. The index set of endogenous variables is denoted as D, the set of observed variables as O. The Markov blanket MB(Y) is the minimal subset S of the observed variables such that Y is independent of the remaining observed variables given S. (Sometimes the Markov blanket is defined as the minimal separating set.)
3.2 Actionable Recourse
Following Karimi et al. (2021), we model actions as structural interventions do(X_I := theta). In this framework, an action a = do({X_i := theta_i : i in I}) is constructed from an index set I of features to be intervened upon and the values theta they are set to.
Assuming invertibility of the structural equations, the effect of an intervention for an individual with observed features x can be determined using structural counterfactuals, which are computed in three steps (Pearl, 2009): First, the factual distribution of the exogenous variables given the endogenous variables, U | X = x, is computed (abduction). Second, the structural interventions corresponding to the action a are performed, yielding the modified structural equations (action). Finally, the counterfactual values x^CF are predicted by propagating the abducted exogenous variables through the modified equations (prediction).
The optimization problem for recourse through minimal interventions (Karimi et al., 2021) is given by

a* in argmin_{a in A} cost(a; x) subject to h(x^CF(a)) >= t,

where A is the action space, x^CF(a) is the structural counterfactual of x under action a, h is the predictor and t a threshold. Further constraints have been suggested, e.g., actionability and plausibility constraints.
3.3 Generalizability and Intervention Stability
We leverage necessary conditions for invariant conditional distributions as derived in Pfister et al. (2019). The authors introduce a d-separation-based intervention stability criterion that is applied to a modified version of the graph G. For every intervened-upon variable X_j, an auxiliary intervention variable, denoted as I_j, is added as a direct cause of X_j, yielding the augmented graph G_aug. The intervention variable can be seen as a switch between different mechanisms. A set S is called intervention stable with respect to a set of actions if for every intervened-upon variable X_j the d-separation of I_j and Y given S holds in G_aug (background on d-separation in Appendix B.1). The authors show that intervention stability implies an invariant conditional distribution, i.e., for all actions that only intervene on such variables, the conditional distribution of Y given S coincides with the one in the observational distribution (Pfister et al. (2019), Appendix A).
4 Meaningful Algorithmic Recourse
Algorithmic recourse searches for optimal actions by assessing the prediction over a range of interventions.
However, since predictive models are designed to be employed in a static observational distribution, they often fail to generalize to interventional environments:
For example, predictive models exploit all useful associations with the target, irrespective of the type of causal relationship between feature and target.
As a consequence, variables that are not causal for the target can be causal for the prediction (Molnar et al., 2020). Interventions on such non-causal features may flip the prediction, but do not affect the underlying target. Thus, the model's performance is not stable under such interventions.
This lack of intervention stability is problematic for two reasons: Firstly, following a recourse recommendation might not lead to improvement but rather game the predictor. Secondly, a refit with access to the post-recourse distribution, in which the exploited associations are weakened, will not honor the original recourse recommendation.
The related field of strategic prediction aims to adapt the model to strategic agent behavior (Miller et al., 2020). However, the approach has a catch: As Shavit et al. (2020) argue, designing the model to incentivize agent outcome (improvement) is often in conflict with achieving optimal predictive accuracy. For instance, in the example in Section 1 the model's reliance on minivan would need to be reduced to incentivize improvement.
We propose an alternative: Instead of altering the model such that gaming is not lucrative, we allow the model to use gameable associations but constrain algorithmic recourse recommendations to those that are meaningful (Definition 1). To this end, we require knowledge of the full SCM generating the (observed) feature and target variables.
Assumption 1
We assume knowledge of the SCM that generates X and Y. Furthermore, we assume that Y is generated by a structural equation f_Y, i.e., Y := f_Y(X_pa(Y), U_Y) (with U_Y mutually independent of the remaining exogenous variables).
Definition 1
Meaningful algorithmic recourse (MAR) is algorithmic recourse (Karimi et al., 2021) with the additional constraint that the underlying target is improved coherently, i.e., y^CF >= t, where y^CF denotes the structural counterfactual of Y under the recommended action. (Assumption 1 implies that y^CF can be computed.)
Naturally, such a restriction reduces the insight that we gain into the model. However, in our view, full model insight is not required for every explainee goal. We therefore distinguish two tales of algorithmic recourse in Section 5. Since we may not have access to the causal model required to compute MAR, we propose an alternative formulation of MAR that does not require the structural counterfactual of Y (Section 6).
5 The two tales of algorithmic recourse
Machine learning explanations as suggested by Karimi et al. (2020b, 2021) may be used for two distinct purposes: for model audit and for meaningful, actionable recourse.
Model auditors aim to make sure that models meet desired standards (e.g., fairness) and extrapolate well to unseen regions. Model audit explanations can allow inspectors to contest model decisions, suggest model-debugging strategies or give insight into model behavior within and outside of the data distribution (Wachter et al., 2018; Freiesleben, 2020). Hence, these explanations must be maximally faithful to the prediction model.
Recourse recommendations, on the other hand, need to satisfy various side constraints that are not related to the model. Even the causal dependencies between variables that algorithmic recourse takes into account are not reflected in the prediction model (Karimi et al., 2021). Recourse recommendations must also be actionable for the explainee; thus, changes in non-actionable features like age, ethnicity, or height are commonly prohibited (Ustun et al., 2019; Karimi et al., 2021). Moreover, recourse recommendations must be plausible, i.e., make realistic suggestions that are jointly satisfiable, and prefer sparse over widespread action recommendations (Karimi et al., 2020a; Dandl et al., 2020).
In conclusion, model audit explanations are more complete and faithful to the model, while recourse recommendations are more selective, faithful to the underlying process, and account for the limitations of the data subject. We believe that the selectivity of recourse recommendations and their reliance on factors besides the model itself is not a limitation but essential to making explanations more relevant to the data subject. In the same vein, we see MAR as another step towards making recourse recommendations more meaningful.
If, however, a data subject is more interested in contesting or auditing the algorithmic decision, recourse recommendations are not suitable. Instead, we suggest that data subjects should additionally receive model audit explanations upon request.
6 Formulation based on Effective Intervention Constraint
Even if we have access to the SCM modeling X, we may not know the structural equation generating Y. In this section we therefore introduce effective algorithmic recourse (EAR) as an alternative. Instead of computing whether the counterfactual of Y is flipped as desired (MAR), EAR restricts actions to exclusively intervene on causes of Y. Since any meaningful recourse recommendation intervenes on causes exclusively, the constraint does not exclude any meaningful recommendation. And, as we demonstrate in this section, access to the full causal graph can suffice to ensure robustness of the model to interventions on causes of Y, and therefore meaningful recourse.
Definition 2
Effective algorithmic recourse (EAR) is algorithmic recourse (Karimi et al., 2021) with the further constraint that only effective actions are allowed, i.e., actions that exclusively intervene on causes of Y.
We leverage research on invariant prediction (Pfister et al., 2019) (Section 3.3) to formalize under which assumptions EAR provides meaningful recourse. For the recommendation to lead to improvement, the model has to predict accurately in the respective action distribution. Since intervention stability implies invariant conditionals (Section 3.3), for predictors that rely on intervention-stable sets, EAR recommendations are as likely to lead to improvement as the predictor is able to predict correctly in the pre-recourse distribution.
In Figure 2 we illustrate under which interventions the minimal optimal set, the so-called Markov blanket, is intervention stable. Thereby, the set of observed variables plays a crucial role. For example, if the minimal set of variables that d-separates all remaining variables from Y is observed, the Markov blanket is intervention stable (Proposition 1). In contrast, an intervention on an unobserved direct cause (as well as interventions on non-causal variables) may alter the conditional distribution of Y given the Markov blanket.
Proposition 1
If all endogenous direct causes, direct children and spouses of Y are observed, the Markov blanket is stable with respect to interventions on all endogenous causes of Y.
If the predictor is perfect and intervention stable, meaning that it predicts Y without error and relies on an intervention-stable set, EAR affects prediction and target coherently.
Proposition 2
Assuming a perfect predictor (i.e., one that predicts Y without error in the pre-recourse distribution) that relies on a variable set which is stable with respect to interventions on actionable causes of Y, for EAR recommendations with h(x^CF) >= t it also holds that y^CF >= t.
Even if the conditional distribution is invariant, the joint distribution of the variables is almost certainly affected if individuals act on recourse recommendations. If the interventional joint distribution extends the support of the observational distribution, the model would be forced to predict outside of the training distribution. EAR recommendations that extrapolate should therefore be handled with care.
If the conditional is stable and the predictor is able to perfectly predict under any action that may be recommended by recourse, ceteris paribus the recourse will be honored by an optimal refit of the model on pre and postrecourse data.
7 Limitations and Discussion
In order to generate recourse recommendations that are more robust and meaningful, affecting not only the prediction but also the underlying target Y, we introduced meaningful algorithmic recourse (MAR) and a relaxation called effective algorithmic recourse (EAR).
Our approach is based on strong assumptions: We require that Y can be perfectly predicted from the observed features and assume existence and knowledge of an underlying SCM with invertible structural equations. EAR requires that the model is stable w.r.t. interventions on causes. Evaluating the intervention stability requires knowledge of the causal graph.
Furthermore, one may argue that explanations should be maximally faithful to the model. Then, the gameability should be exposed. However, we argue that recommending gaming is problematic for both model authorities and individuals who seek robust recommendations that help them to improve. To reconcile both positions, we recommend offering additional model audit explanations upon request, but to only offer guarantees for MAR. In order to provide a variety of options, model authorities should aim to observe causes rather than noncausal variables. If the model is able to predict accurately based on causes alone, model audit and meaningful, actionable recourse converge.
It could be objected that even if the conditional distribution is invariant to MAR, the actions induce a shift in the distribution of Y. Indeed, if the threshold t acts as a gatekeeper to a limited good, the threshold itself may shift as a result. Consequently, even if the underlying target Y improves as desired, recourse may not be honored.
Further research is required to transfer the results into a probabilistic setting with imperfect causal knowledge. In such settings the robustness of recourse with respect to refits is particularly challenging since recourse may be applied selectively and thereby can open additional dependence paths that allow for an improved but different predictor.
We see our work as a first step towards more meaningful and robust recourse.
Acknowledgments
This project is funded by the German Federal Ministry of Education and Research (BMBF) under Grant No. 01IS18036A and the Graduate School of Systemic Neurosciences (GSN) Munich.
References
Barocas et al. (2020). The hidden assumptions behind counterfactual explanations and principal reasons. In Proceedings of the 2020 Conference on Fairness, Accountability, and Transparency, pp. 80–89.
Dandl et al. (2020). Multi-objective counterfactual explanations. In International Conference on Parallel Problem Solving from Nature, pp. 448–469.
Freiesleben (2020). Counterfactual explanations & adversarial examples: common grounds, essential differences, and potential transfers. arXiv preprint arXiv:2009.05487.
Geiger, Verma, and Pearl (1990). Identifying independence in Bayesian networks. Networks 20(5), pp. 507–534.
Karimi et al. (2020a). Model-agnostic counterfactual explanations for consequential decisions. In International Conference on Artificial Intelligence and Statistics, pp. 895–905.
Karimi et al. (2020b). Algorithmic recourse under imperfect causal knowledge: a probabilistic approach. In Advances in Neural Information Processing Systems, Vol. 33, pp. 265–277.
Karimi et al. (2021). Algorithmic recourse: from counterfactual explanations to interventions. In Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency, pp. 353–362.
Miller et al. (2020). Strategic classification is causal modeling in disguise. In International Conference on Machine Learning, pp. 6917–6926.
Molnar et al. (2020). Pitfalls to avoid when interpreting machine learning models. ICML Workshop on XAI.
Pawelczyk et al. (2020). On counterfactual explanations under predictive multiplicity. In Conference on Uncertainty in Artificial Intelligence, pp. 809–818.
Pearl (2009). Causality: Models, Reasoning, and Inference. Cambridge University Press.
Peters et al. (2018). Elements of Causal Inference: Foundations and Learning Algorithms. MIT Press.
Pfister et al. (2019). Stabilizing variable selection and regression. arXiv preprint arXiv:1911.01850.
Raghavan et al. (2020). Mitigating bias in algorithmic hiring: evaluating claims and practices. In Proceedings of the 2020 Conference on Fairness, Accountability, and Transparency, pp. 469–481.
Rawal et al. (2020). Can I still trust you? Understanding the impact of distribution shifts on algorithmic recourses. arXiv preprint arXiv:2012.11788.
Shavit et al. (2020). Causal strategic linear regression. In Proceedings of the 37th International Conference on Machine Learning, PMLR 119, pp. 8676–8686.
Spirtes, Glymour, and Scheines. Causation, Prediction, and Search. MIT Press.
Upadhyay et al. (2021). Towards robust and reliable algorithmic recourse. arXiv preprint arXiv:2102.13620.
Ustun et al. (2019). Actionable recourse in linear classification. In Proceedings of the Conference on Fairness, Accountability, and Transparency, pp. 10–19.
Van Liebergen (2017). Machine learning: a revolution in risk management and compliance? Journal of Financial Transformation 45, pp. 60–67.
Venkatasubramanian and Alfano (2020). The philosophical basis of algorithmic recourse. In Proceedings of the 2020 Conference on Fairness, Accountability, and Transparency, pp. 284–293.
Wachter et al. (2018). Counterfactual explanations without opening the black box: automated decisions and the GDPR. Harvard Journal of Law & Technology 31(2), pp. 1–52.
Zeng et al. (2015). Interpretable classification models for recidivism prediction. arXiv preprint arXiv:1503.07810.
Appendix A Proofs
a.1 Proof of Proposition 1
If all endogenous direct causes, direct children and spouses of Y are observed, the conditional distribution of Y given the Markov blanket is stable with respect to interventions on any set of endogenous causes of Y.
We prove the statement in four steps.
Given a graph G and an endogenous variable Y, the set of endogenous direct parents, direct effects and direct parents of effects of Y is the minimal separating set: standard result, see e.g. Peters et al. (2018), Proposition 6.27.
The minimal separating set in the augmented graph G_aug coincides with the one in G: The minimal separating set contains direct causes, direct effects and direct parents of direct effects. An intervention variable I_j is never a direct cause of Y. Also, since I_j has no endogenous causes, it cannot be a direct effect of Y. Furthermore, since we restrict interventions to be performed on causes of Y, the only child X_j of I_j is a cause of Y and thus, by acyclicity, cannot be a direct effect of Y; hence I_j cannot be a direct parent of a direct effect.
The minimal separating set is intervention stable: It follows that all intervention variables I_j are d-separated from Y in G_aug by the minimal separating set. Therefore the minimal separating set is intervention stable.
Then the Markov blanket is intervention stable as well: Since d-separation implies conditional independence, each I_j is independent of Y given the minimal separating set. Under the assumption that all direct causes, direct children and spouses are observed, the Markov blanket coincides with the minimal separating set, so any independence entailed by the minimal separating set also holds for the Markov blanket. Since Pfister et al. (2019) only require the independence implied by d-separation in their invariant-conditional proof, the same implication holds for the Markov blanket. It follows that the conditional distribution of Y given the Markov blanket is invariant with respect to interventions on any set of endogenous causes.
a.2 Proof of Proposition 2
Assuming a perfect predictor (i.e., one that predicts Y without error in the pre-recourse distribution) that relies on a variable set which is stable with respect to interventions on actionable causes of Y, for EAR recommendations with h(x^CF) >= t it also holds that y^CF >= t.
Since the variable set S that the predictor relies on is intervention stable with respect to all effective actions, the conditional distribution of Y given S is the same under any such action (for a proof please refer to Pfister et al. (2019), Appendix A), and therefore the predictor's accuracy is preserved under every effective action. Since the predictor works perfectly under the null action (no interventions), it also predicts Y without error in every post-action distribution, so y^CF = h(x^CF) >= t whenever h(x^CF) >= t, which proves the statement.