
Equalizing Recourse across Groups

09/07/2019
by Vivek Gupta, et al.
The University of Utah

The rise in machine learning-assisted decision-making has led to concerns about the fairness of those decisions and to techniques for mitigating discrimination. If a negative decision is made about an individual (denying a loan, rejecting an application for housing, and so on), justice dictates that we be able to ask how circumstances might be changed to obtain a favorable decision the next time. Moreover, the ability to change circumstances (a better education, improved credentials) should not be limited to those with access to expensive resources. In other words, recourse for negative decisions should be considered a desirable value that can be equalized across (demographically defined) groups. This paper describes how to build models that make accurate predictions while ensuring that the penalties for a negative outcome do not disadvantage different groups disproportionately. We measure recourse as the distance of an individual from the decision boundary of a classifier. We then introduce a regularized objective to minimize the difference in recourse across groups. We explore linear settings and further extend recourse to non-linear settings, as well as model-agnostic settings where the exact distance from the boundary cannot be calculated. Our results show that we can successfully decrease the unfairness in recourse while maintaining classifier performance.
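
To make the measure and the regularizer concrete, here is a minimal sketch for the linear case, where recourse for a point x can be taken as its distance to the decision boundary, |w·x + b| / ||w||, and the penalty is the absolute difference in average recourse between two groups among negatively classified points. All names here (recourse_gap, train_equal_recourse, the weight lam, binary 0/1 group labels) are illustrative assumptions, not the paper's implementation; the non-linear and model-agnostic settings mentioned above would replace the exact distance with a proxy.

```python
import torch
import torch.nn.functional as F

def recourse_distance(X, w, b):
    # Distance of each point to the linear decision boundary w.x + b = 0.
    return torch.abs(X @ w + b) / torch.norm(w)

def recourse_gap(X, groups, w, b):
    # Absolute difference in mean recourse distance between the two groups,
    # computed over points that currently receive a negative decision.
    scores = X @ w + b
    neg = scores < 0                       # negatively classified points
    d = recourse_distance(X, w, b)
    means = []
    for g in (0, 1):
        mask = neg & (groups == g)
        means.append(d[mask].mean() if mask.any() else torch.tensor(0.0))
    return torch.abs(means[0] - means[1])

def train_equal_recourse(X, y, groups, lam=1.0, lr=0.05, epochs=500):
    # Logistic regression with a recourse-disparity penalty of weight lam.
    # X: float features (n, p); y: float labels in {0., 1.};
    # groups: integer tensor of group ids in {0, 1}.
    n, p = X.shape
    w = 0.01 * torch.randn(p)              # small init avoids ||w|| = 0
    w.requires_grad_(True)
    b = torch.zeros(1, requires_grad=True)
    opt = torch.optim.Adam([w, b], lr=lr)
    for _ in range(epochs):
        opt.zero_grad()
        loss = F.binary_cross_entropy_with_logits(X @ w + b, y)
        loss = loss + lam * recourse_gap(X, groups, w, b)
        loss.backward()
        opt.step()
    return w.detach(), b.detach()
```

In this sketch, increasing lam trades some classification accuracy for a smaller recourse gap between groups, mirroring the accuracy/fairness trade-off the abstract refers to.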


Related research:

10/13/2020 | FaiR-N: Fair and Robust Neural Networks for Structured Data
Fairness in machine learning is crucial when individuals are subject to ...

01/13/2023 | Fairness and Sequential Decision Making: Limits, Lessons, and Opportunities
As automated decision making and decision assistance systems become comm...

11/29/2017 | Paradoxes in Fair Computer-Aided Decision Making
Computer-aided decision making, where some classifier (e.g., an algorith...

10/03/2018 | From Soft Classifiers to Hard Decisions: How fair can we be?
A popular methodology for building binary decision-making classifiers in...

08/19/2019 | Learning Fair Classifiers in Online Stochastic Settings
In many real life situations, including job and loan applications, gatek...

09/18/2018 | Actionable Recourse in Linear Classification
Classification models are often used to make decisions that affect human...