Fair Sequential Selection Using Supervised Learning Models

10/26/2021
by Mohammad Mahdi Khalili, et al.

We consider a selection problem in which sequentially arriving applicants apply for a limited number of positions/jobs. At each time step, a decision maker accepts or rejects the current applicant using a pre-trained supervised learning model until all the vacant positions are filled. In this paper, we discuss whether the fairness notions commonly used in classification problems (e.g., equal opportunity, statistical parity) are suitable for sequential selection problems. In particular, we show that even with a pre-trained model that satisfies these common fairness notions, the selection outcomes may still be biased against certain demographic groups. This observation implies that the fairness notions used in classification problems are not suitable for selection problems in which applicants compete for a limited number of positions. We introduce a new fairness notion, "Equal Selection (ES)," suitable for sequential selection problems and propose a post-processing approach to satisfy it. We also consider a setting where the applicants have privacy concerns and the decision maker only has access to a noisy version of the sensitive attributes. In this setting, we show that perfect ES fairness can still be attained under certain conditions.
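As a rough illustration of the observation above, the sketch below simulates sequential selection with a classifier that satisfies equal opportunity (identical true and false positive rates across two groups) yet still yields unequal selection probabilities once the number of positions is limited. The group sizes, qualification rates, classifier error rates, and the reported disparity metric are illustrative assumptions; this is not the paper's construction, and it does not implement the paper's ES notion or its post-processing method.

import numpy as np

rng = np.random.default_rng(0)

def simulate(n_applicants=1_000, n_positions=50, n_trials=500):
    # Illustrative assumptions: two equally likely groups with different
    # qualification rates, and a classifier whose true/false positive rates
    # are identical across groups (so equal opportunity holds).
    p_qualified = np.array([0.6, 0.3])   # Pr(qualified | group)
    tpr, fpr = 0.8, 0.1                  # same for both groups

    selected = np.zeros(2)
    arrived = np.zeros(2)

    for _ in range(n_trials):
        group = rng.integers(0, 2, n_applicants)     # random arrival order
        qualified = rng.random(n_applicants) < p_qualified[group]
        classified_pos = np.where(qualified,
                                  rng.random(n_applicants) < tpr,
                                  rng.random(n_applicants) < fpr)

        # Sequential selection: accept every positively classified applicant
        # until all positions are filled.
        slots = n_positions
        for g, pos in zip(group, classified_pos):
            arrived[g] += 1
            if pos and slots > 0:
                selected[g] += 1
                slots -= 1

    for g in (0, 1):
        print(f"group {g}: Pr(selected) ~= {selected[g] / arrived[g]:.3f}")

if __name__ == "__main__":
    simulate()

Because the two groups have different qualification rates, their rates of positive classification differ even though the classifier treats qualified applicants in both groups identically; with a fixed number of slots, this translates into a noticeably lower selection probability for the group with the lower qualification rate.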


