Statistical Guarantees for Fairness Aware Plug-In Algorithms

07/27/2021
by Drona Khurana et al.

A plug-in algorithm for estimating Bayes-optimal classifiers in fairness-aware binary classification was proposed by Menon & Williamson (2018); however, the statistical efficacy of their approach has not been established. We prove that the plug-in algorithm is statistically consistent. We also derive finite-sample guarantees for learning the Bayes-optimal classifiers via the plug-in algorithm. Finally, we propose a protocol that modifies the plug-in approach so as to simultaneously guarantee fairness and differential privacy with respect to a binary feature deemed sensitive.
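
The plug-in recipe at the heart of the paper is easy to state, so a sketch may help fix ideas. Below is a minimal, hypothetical Python illustration in the spirit of Menon & Williamson (2018): fit a class-probability estimator for eta(x) = P(Y = 1 | X = x), then classify by thresholding it with a cutoff that depends on the binary sensitive feature. The logistic model, the demographic-parity measure, the threshold grid search, and the randomized-response privacy step are all assumptions made for illustration here, not the authors' exact construction.

```python
# Sketch of a fairness-aware plug-in classifier: (1) estimate
# eta(x) = P(Y = 1 | X = x); (2) threshold the estimate with a
# group-dependent cutoff traded off against a fairness measure.
# All concrete choices below are illustrative assumptions.
import numpy as np
from sklearn.linear_model import LogisticRegression


def fit_eta(X, y):
    """Step 1: a class-probability estimator for eta(x)."""
    return LogisticRegression(max_iter=1000).fit(X, y)


def plug_in_predict(model, X, s, thresholds):
    """Step 2: threshold eta_hat with a per-group cutoff.

    s is the binary sensitive feature; thresholds maps group -> cutoff.
    """
    eta_hat = model.predict_proba(X)[:, 1]
    cuts = np.where(s == 1, thresholds[1], thresholds[0])
    return (eta_hat >= cuts).astype(int)


def demographic_parity_gap(y_pred, s):
    """|P(Yhat=1 | s=0) - P(Yhat=1 | s=1)|: one common fairness measure."""
    return abs(y_pred[s == 0].mean() - y_pred[s == 1].mean())


def select_thresholds(model, X_val, y_val, s_val, lam=1.0):
    """Pick (t0, t1) on held-out data, penalising the parity gap by lam."""
    grid = np.linspace(0.05, 0.95, 19)
    best, best_obj = {0: 0.5, 1: 0.5}, np.inf
    for t0 in grid:
        for t1 in grid:
            y_hat = plug_in_predict(model, X_val, s_val, {0: t0, 1: t1})
            obj = (y_hat != y_val).mean() + lam * demographic_parity_gap(y_hat, s_val)
            if obj < best_obj:
                best, best_obj = {0: t0, 1: t1}, obj
    return best


def randomize_sensitive_bit(s, epsilon, seed=0):
    """Randomized response: a standard epsilon-locally-DP release of the
    binary sensitive feature. Shown only as one plausible privacy step;
    the paper's actual protocol may differ."""
    rng = np.random.default_rng(seed)
    p_keep = np.exp(epsilon) / (1.0 + np.exp(epsilon))
    flip = rng.random(s.shape) >= p_keep
    return np.where(flip, 1 - s, s)
```

The paper's consistency and finite-sample results concern this kind of pipeline: roughly, how quickly the thresholded estimate approaches the fairness-aware Bayes-optimal classifier as the sample size grows.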


Related research

02/23/2022 · Fairness-Aware Naive Bayes Classifier for Data with Multiple Sensitive Features
Fairness-aware machine learning seeks to maximise utility in generating ...

10/28/2022 · Fairness Certificates for Differentially Private Classification
In this work, we theoretically study the impact of differential privacy ...

07/31/2023 · A Suite of Fairness Datasets for Tabular Classification
There have been many papers with algorithms for improving fairness of ma...

02/20/2022 · Bayes-Optimal Classifiers under Group Fairness
Machine learning algorithms are becoming integrated into more and more h...

02/16/2021 · Constructing Multiclass Classifiers using Binary Classifiers Under Log-Loss
The construction of multiclass classifiers from binary classifiers is st...

11/11/2013 · Learning Mixtures of Linear Classifiers
We consider a discriminative learning (regression) problem, whereby the ...

05/21/2015 · On the relation between accuracy and fairness in binary classification
Our study revisits the problem of accuracy-fairness tradeoff in binary c...
