Equal Improvability: A New Fairness Notion Considering the Long-term Impact

10/13/2022
by Ozgur Guldogan, et al.

Devising a fair classifier that does not discriminate against different groups is an important problem in machine learning. Although researchers have proposed various ways of defining group fairness, most of them focus only on immediate fairness, ignoring the long-term impact of a fair classifier under dynamic scenarios in which individuals can improve their features over time. Such dynamic scenarios occur in the real world, e.g., college admission and credit lending, where rejected applicants make an effort to change their features so as to be accepted later. In this dynamic setting, long-term fairness should equalize the feature distributions across different groups after the rejected samples have made some effort to improve. To promote long-term fairness, we propose a new fairness notion called Equal Improvability (EI), which equalizes the potential acceptance rate of rejected samples across different groups, assuming each rejected sample spends a bounded level of effort. We analyze the properties of EI and its connections with existing fairness notions. To find a classifier that satisfies the EI requirement, we propose and study three approaches that solve EI-regularized optimization problems. Through experiments on both synthetic and real datasets, we demonstrate that the proposed EI-regularized algorithms find classifiers that are fair in terms of EI. Finally, we provide experimental results on dynamic scenarios that highlight the advantages of the EI metric in achieving long-term fairness. Code is available at https://github.com/guldoganozgur/ei_fairness.
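To make the notion concrete, below is a minimal, illustrative sketch of how an EI-style disparity could be measured for a linear classifier under an L2-bounded effort budget. The function name equal_improvability_gap, the linear score, the choice of effort norm, and the delta and threshold parameters are all assumptions made for illustration, not the paper's exact formulation; the authors' EI-regularized algorithms are in the linked repository.

```python
import numpy as np

def equal_improvability_gap(X, groups, w, b, delta=1.0, threshold=0.0):
    """Illustrative EI disparity for a linear classifier score(x) = w @ x + b.

    A sample is accepted when its score reaches `threshold`. Under an L2 effort
    budget ||dx|| <= delta, the largest score increase a rejected sample can
    achieve is delta * ||w||, so it is 'improvable' when
    score(x) + delta * ||w|| >= threshold.
    Returns the largest gap in improvable rates among rejected samples
    across groups.
    """
    scores = X @ w + b
    rejected = scores < threshold               # samples currently rejected
    boost = delta * np.linalg.norm(w)           # best achievable score increase
    improvable = (scores + boost) >= threshold  # could be accepted after effort

    rates = []
    for g in np.unique(groups):
        mask = rejected & (groups == g)         # rejected members of group g
        if mask.any():
            rates.append(improvable[mask].mean())
    return float(max(rates) - min(rates)) if rates else 0.0

# Toy usage with random data and two groups (purely illustrative).
rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 5))
groups = rng.integers(0, 2, size=1000)
w, b = rng.normal(size=5), -0.5
print(equal_improvability_gap(X, groups, w, b, delta=0.3))
```

An EI-regularized training objective, in the spirit of the three approaches mentioned above, would add a differentiable surrogate of this gap as a penalty to the standard classification loss; the repository contains the implementations studied in the paper.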

Related research

Equal Long-term Benefit Rate: Adapting Static Fairness Notions to Sequential Decision Making (09/07/2023)
Decisions made by machine learning models may have lasting impacts over ...

Enforcing Delayed-Impact Fairness Guarantees (08/24/2022)
Recent research has shown that seemingly fair machine learning models, w...

Delayed Impact of Fair Machine Learning (03/12/2018)
Fairness in machine learning has predominantly been studied in static cl...

Achieving Long-Term Fairness in Sequential Decision Making (04/04/2022)
In this paper, we propose a framework for achieving long-term fair seque...

Fairness-Aware Learning from Corrupted Data (02/11/2021)
Addressing fairness concerns about machine learning models is a crucial ...

Equal Experience in Recommender Systems (10/12/2022)
We explore the fairness issue that arises in recommender systems. Biased...

Runtime Monitoring of Dynamic Fairness Properties (05/08/2023)
A machine-learned system that is fair in static decision-making tasks ma...
