Implementing Fair Regression In The Real World

04/09/2021
by Boris Ruf, et al.

Most fair regression algorithms mitigate bias toward sensitive subpopulations and therefore improve fairness at the group level. In this paper, we investigate the impact of such implementations of fair regression on the individual. More precisely, we assess how continuous predictions evolve from an unconstrained to a fair algorithm by comparing the results of baseline algorithms with those of fair regression algorithms on the same data points. Based on our findings, we propose a set of post-processing algorithms to improve the utility of existing fair regression approaches.
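The per-individual comparison described in the abstract can be sketched as follows. This is a minimal illustration, not the authors' method: the function names and the group-wise summary are hypothetical, and it simply measures how each data point's continuous prediction moves between a baseline model and a fairness-constrained one.

```python
import numpy as np

def prediction_shift(y_base, y_fair):
    """Per-individual change in continuous predictions when moving
    from an unconstrained (baseline) model to a fair model."""
    y_base = np.asarray(y_base, dtype=float)
    y_fair = np.asarray(y_fair, dtype=float)
    return y_fair - y_base

def shift_summary(y_base, y_fair, groups):
    """Average prediction shift within each sensitive group.

    Hypothetical helper: a large group-level average can hide the fact
    that individual predictions moved substantially in both directions.
    """
    shifts = prediction_shift(y_base, y_fair)
    groups = np.asarray(groups)
    return {g: float(shifts[groups == g].mean()) for g in np.unique(groups)}

# Example: group averages are unchanged, yet every individual's
# prediction moved by 0.5 in one direction or the other.
base = [1.0, 2.0, 3.0, 4.0]
fair = [1.5, 1.5, 3.5, 3.5]
print(prediction_shift(base, fair))            # individual shifts
print(shift_summary(base, fair, ["a", "a", "b", "b"]))
```

The example highlights why a group-level fairness audit alone is not enough: both groups show a zero average shift even though no individual prediction was left untouched.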


Related research

01/31/2021
Priority-based Post-Processing Bias Mitigation for Individual and Group Fairness
Previous post-processing bias mitigation algorithms on both group and in...

08/25/2023
Optimizing Group-Fair Plackett-Luce Ranking Models for Relevance and Ex-Post Fairness
In learning-to-rank (LTR), optimizing only the relevance (or the expecte...

10/08/2021
Fair Regression under Sample Selection Bias
Recent research on fair regression focused on developing new fairness no...

07/04/2019
Fair Kernel Regression via Fair Feature Embedding in Kernel Space
In recent years, there have been significant efforts on mitigating uneth...

02/12/2019
Effects of empathy on the evolution of fairness in group-structured populations
The ultimatum game has been a prominent paradigm in studying the evoluti...

05/31/2021
Rawlsian Fair Adaptation of Deep Learning Classifiers
Group-fairness in classification aims for equality of a predictive utili...

04/17/2020
An Asynchronous Computability Theorem for Fair Adversaries
This paper proposes a simple topological characterization of a large cla...
