On preserving non-discrimination when combining expert advice

10/28/2018
by Avrim Blum et al.

We study the interplay between sequential decision making and avoiding discrimination against protected groups, when examples arrive online and do not follow distributional assumptions. We consider the most basic extension of classical online learning: "Given a class of predictors that are individually non-discriminatory with respect to a particular metric, how can we combine them to perform as well as the best predictor, while preserving non-discrimination?" Surprisingly, we show that this task is unachievable for the prevalent notion of "equalized odds", which requires equal false negative rates and equal false positive rates across groups. On the positive side, for another notion of non-discrimination, "equalized error rates", we show that running separate instances of the classical multiplicative weights algorithm for each group achieves this guarantee. Interestingly, even for this notion, we show that algorithms with stronger performance guarantees than multiplicative weights cannot preserve non-discrimination.
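The positive result above can be illustrated concretely. Below is a minimal sketch of the construction the abstract describes: maintain an independent multiplicative-weights learner per protected group, so each group's regret (and hence error rate) is controlled separately. The class name `GroupwiseMW` and the learning rate `eta` are illustrative choices, not notation from the paper.

```python
class GroupwiseMW:
    """One multiplicative-weights expert aggregator per protected group.

    Each group gets its own weight vector; an example only updates the
    weights of its own group, so the standard MW regret bound holds
    per group rather than only in aggregate.
    """

    def __init__(self, n_experts, eta=0.5):
        self.n_experts = n_experts
        self.eta = eta              # multiplicative penalty for a mistake
        self.weights = {}           # group -> weight vector, created lazily

    def _w(self, group):
        if group not in self.weights:
            self.weights[group] = [1.0] * self.n_experts
        return self.weights[group]

    def predict(self, group, expert_preds):
        # Weighted-majority vote among experts, using this group's weights.
        w = self._w(group)
        score = sum(wi * p for wi, p in zip(w, expert_preds)) / sum(w)
        return 1 if score >= 0.5 else 0

    def update(self, group, expert_preds, label):
        # Penalize only the experts that erred, and only in this group's
        # weight vector; other groups' weights are untouched.
        w = self._w(group)
        for i, p in enumerate(expert_preds):
            if p != label:
                w[i] *= (1 - self.eta)
```

For example, with two experts where expert 0 is reliable on group "A" and expert 1 on group "B", the two weight vectors diverge so that each group's predictions track its own best expert.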


