Algorithmic Recourse in the Face of Noisy Human Responses

03/13/2022
by Martin Pawelczyk, et al.

As machine learning (ML) models are increasingly being deployed in high-stakes applications, there has been growing interest in providing recourse to individuals adversely impacted by model predictions (e.g., an applicant whose loan has been denied). To this end, several post hoc techniques have been proposed in recent literature. These techniques generate recourses under the assumption that the affected individuals will implement the prescribed recourses exactly. However, recent studies suggest that individuals often implement recourses in a noisy and inconsistent manner, e.g., raising their salary by $505 if the prescribed recourse suggested an increase of $500. Motivated by this, we introduce and study the problem of recourse invalidation in the face of noisy human responses. More specifically, we theoretically and empirically analyze the behavior of state-of-the-art algorithms, and demonstrate that the recourses generated by these algorithms are very likely to be invalidated if small changes are made to them. We further propose a novel framework, EXPECTing noisy responses (EXPECT), which addresses the aforementioned problem by explicitly minimizing the probability of recourse invalidation in the face of noisy responses. Experimental evaluation with multiple real-world datasets demonstrates the efficacy of the proposed framework and supports our theoretical findings.
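The core quantity in the abstract is the probability that a recourse is invalidated when it is implemented only approximately. A minimal sketch of how such a probability could be estimated by Monte Carlo sampling is shown below; the toy linear classifier, the Gaussian noise model, and the function names are illustrative assumptions for exposition, not the paper's exact formulation.

```python
import numpy as np

def invalidation_probability(model, recourse, noise_std=0.05, n_samples=2000, seed=0):
    """Estimate the fraction of noisy implementations of `recourse` that
    the model rejects.

    A noisy implementation is recourse + eps with eps ~ N(0, noise_std^2 I),
    mimicking individuals who act on a prescribed recourse only approximately
    (e.g., raising salary by $505 instead of $500).
    This is an illustrative sketch, not the EXPECT framework itself.
    """
    rng = np.random.default_rng(seed)
    noise = rng.normal(0.0, noise_std, size=(n_samples, recourse.shape[0]))
    noisy_recourses = recourse[None, :] + noise
    preds = model(noisy_recourses)        # 1 = favorable outcome
    return float(np.mean(preds == 0))     # share of samples that get invalidated

# Toy linear classifier: outcome is favorable iff w.x + b > 0 (assumed for illustration).
w, b = np.array([1.0, 1.0]), -1.0
model = lambda X: ((X @ w + b) > 0).astype(int)

# A recourse sitting exactly on the decision boundary is invalidated roughly
# half the time under symmetric noise, while one placed further inside the
# favorable region is far more robust.
on_boundary = np.array([0.5, 0.5])
interior    = np.array([1.0, 1.0])
p_boundary = invalidation_probability(model, on_boundary)
p_interior = invalidation_probability(model, interior)
```

A recourse generator that minimizes this estimated probability (rather than only the distance to the decision boundary) would trade a larger prescribed change for robustness to noisy implementation, which is the trade-off the EXPECT framework formalizes.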

