Pairwise Fairness for Ordinal Regression

by Matthäus Kleindessner et al.

We initiate the study of fairness for ordinal regression (also known as ordinal classification). We adapt two fairness notions previously considered in fair ranking and propose a strategy for training a predictor that is approximately fair according to either notion. Our predictor is a threshold model, composed of a scoring function and a set of thresholds; our strategy learns the scoring function via a reduction to fair binary classification and chooses the thresholds via local search. A parameter controls the trade-off between the accuracy and the fairness of the predictor. In extensive experiments we show that our strategy effectively explores the accuracy-vs-fairness trade-off and that it often compares favorably to "unfair" state-of-the-art methods for ordinal regression, yielding predictors that are only slightly less accurate but significantly fairer.
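The predictor described above is a standard ordinal threshold model: a real-valued score is mapped to one of K ordered labels by comparing it against K-1 increasing thresholds, and the thresholds themselves are tuned by local search against some objective (in the paper, one trading off accuracy and fairness via a parameter). A minimal sketch of both pieces, with illustrative names and a generic `objective` callback standing in for the paper's actual criterion:

```python
import bisect

def predict_ordinal(score, thresholds):
    """Map a real-valued score to an ordinal label in {0, ..., K-1}.

    `thresholds` is a sorted list of K-1 cut points; the predicted
    label is the number of thresholds the score exceeds.
    """
    return bisect.bisect_right(thresholds, score)

def local_search_thresholds(thresholds, objective, step=0.25, max_iter=100):
    """Coordinate-wise local search over the threshold vector.

    Repeatedly nudges one threshold up or down by `step` and keeps the
    move if it improves `objective` while the thresholds stay sorted.
    `objective` is any callable scoring a full threshold vector; in the
    paper's setting it would combine accuracy and a pairwise fairness
    penalty weighted by a trade-off parameter.
    """
    best = list(thresholds)
    best_val = objective(best)
    for _ in range(max_iter):
        improved = False
        for i in range(len(best)):
            for delta in (-step, step):
                cand = list(best)
                cand[i] += delta
                if cand != sorted(cand):
                    continue  # keep the thresholds increasing
                val = objective(cand)
                if val > best_val:
                    best, best_val = cand, val
                    improved = True
        if not improved:
            break  # local optimum reached
    return best

# Example: 4 ordered classes defined by 3 thresholds.
thresholds = [-1.0, 0.0, 1.5]
print(predict_ordinal(-2.3, thresholds))  # class 0
print(predict_ordinal(0.7, thresholds))   # class 2
```

Here the local search starts from an initial threshold vector (e.g. the thresholds fitted by a plain ordinal regression model) and greedily improves it, which matches the abstract's description of fixing the scoring function first and adjusting only the thresholds afterwards.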


Strategyproof and Approximately Maxmin Fair Share Allocation of Chores

We initiate the work on fair and strategyproof allocation of indivisible...

Pairwise Fairness for Ranking and Regression

We present pairwise metrics of fairness for ranking and regression model...

A Convex Framework for Fair Regression

We introduce a flexible family of fairness regularizers for (linear and ...

Fair assignment of indivisible objects under ordinal preferences

We consider the discrete assignment problem in which agents express ordi...

Learning Optimal Fair Classification Trees

The increasing use of machine learning in high-stakes domains – where pe...

Fairness Through Regularization for Learning to Rank

Given the abundance of applications of ranking in recent years, addressi...

Fair When Trained, Unfair When Deployed: Observable Fairness Measures are Unstable in Performative Prediction Settings

Many popular algorithmic fairness measures depend on the joint distribut...