Pairwise Fairness for Ranking and Regression

06/12/2019
by Harikrishna Narasimhan, et al.

We present pairwise fairness metrics for ranking and regression models that form analogues of statistical fairness notions such as equal opportunity, equal accuracy, and statistical parity. Our pairwise formulation supports both discrete protected groups and continuous protected attributes. We show that the resulting training problems can be solved efficiently and effectively using constrained-optimization and robust-optimization techniques based on two-player game algorithms developed for fair classification. Experiments illustrate the broad applicability and trade-offs of these methods.
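As a concrete illustration of the pairwise framing (a minimal sketch, not the paper's implementation; the function name, toy data, and group labels are hypothetical), a group-conditioned pairwise accuracy for ranking can be computed as the fraction of correctly ordered pairs whose higher-labeled item belongs to a given group; the gap between groups is then a pairwise fairness violation:

```python
from itertools import product

def pairwise_accuracy(scores, labels, groups, group_id):
    """Fraction of pairs (i, j) with labels[i] > labels[j] and item i
    in the given group that the model ranks correctly (scores[i] > scores[j])."""
    correct = total = 0
    for i, j in product(range(len(scores)), repeat=2):
        if labels[i] > labels[j] and groups[i] == group_id:
            total += 1
            if scores[i] > scores[j]:
                correct += 1
    return correct / total if total else 0.0

# Toy example: two groups "a" and "b" with binary relevance labels.
scores = [0.9, 0.2, 0.3, 0.4]
labels = [1, 0, 1, 0]
groups = ["a", "a", "b", "b"]

acc_a = pairwise_accuracy(scores, labels, groups, "a")  # 1.0
acc_b = pairwise_accuracy(scores, labels, groups, "b")  # 0.5
gap = abs(acc_a - acc_b)  # pairwise fairness violation between groups
```

A constrained training problem in this style would bound `gap` while maximizing overall pairwise accuracy.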


Related research

11/14/2017  Preventing Fairness Gerrymandering: Auditing and Learning for Subgroup Fairness
The most prevalent notions of fairness in machine learning are statistic...

10/29/2021  A Pre-processing Method for Fairness in Ranking
Fair ranking problems arise in many decision-making processes that often...

04/28/2020  Genetic programming approaches to learning fair classifiers
Society has come to rely on algorithms like classifiers for important de...

05/07/2021  Pairwise Fairness for Ordinal Regression
We initiate the study of fairness for ordinal regression, or ordinal cla...

06/27/2019  Learning Fair Representations for Kernel Models
Fair representations are a powerful tool for establishing criteria like ...

05/30/2019  Fair Regression: Quantitative Definitions and Reduction-based Algorithms
In this paper, we study the prediction of a real-valued target, such as ...

09/19/2016  Inherent Trade-Offs in the Fair Determination of Risk Scores
Recent discussion in the public sphere about algorithmic classification ...
