Learning Fair Classifiers: A Regularization-Inspired Approach

06/30/2017
by Yahav Bechavod, et al.

We present a regularization-inspired approach for reducing bias in learned classifiers. In particular, we focus on binary classification tasks over individuals from two populations, where, as our criterion for fairness, we wish to achieve similar false positive rates and similar false negative rates across the two populations. As a proof of concept, we implement our approach and empirically evaluate its ability to achieve both fairness and accuracy, using the COMPAS scores dataset for recidivism prediction.
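The penalized objective described in the abstract can be sketched as follows. This is an illustrative formulation, not the authors' implementation: it adds to the logistic loss a penalty on the gap in *relaxed* false positive and false negative rates between the two groups, where the hard 0/1 predictions are replaced by sigmoid scores so the objective stays differentiable. The `train` helper, the numerical-gradient optimizer, and all parameter values are assumptions made for this sketch.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fair_logistic_loss(w, X, y, group, lam=1.0):
    """Logistic loss plus a penalty (weight lam) on the gap between the
    two groups' relaxed false positive and false negative rates.
    The relaxation replaces hard 0/1 predictions with sigmoid scores."""
    p = sigmoid(X @ w)
    eps = 1e-12
    log_loss = -np.mean(y * np.log(p + eps) + (1 - y) * np.log(1 - p + eps))
    # Relaxed FPR per group: mean score assigned to true negatives.
    fpr = [p[(y == 0) & (group == g)].mean() for g in (0, 1)]
    # Relaxed FNR per group: mean (1 - score) assigned to true positives.
    fnr = [(1 - p[(y == 1) & (group == g)]).mean() for g in (0, 1)]
    penalty = abs(fpr[0] - fpr[1]) + abs(fnr[0] - fnr[1])
    return log_loss + lam * penalty

def train(X, y, group, lam=1.0, lr=0.1, steps=300):
    """Minimize the penalized objective with plain gradient descent,
    using central-difference numerical gradients (illustrative only)."""
    w = np.zeros(X.shape[1])
    h = 1e-5
    for _ in range(steps):
        grad = np.zeros_like(w)
        for j in range(len(w)):
            e = np.zeros_like(w)
            e[j] = h
            grad[j] = (fair_logistic_loss(w + e, X, y, group, lam)
                       - fair_logistic_loss(w - e, X, y, group, lam)) / (2 * h)
        w -= lr * grad
    return w
```

Setting `lam=0` recovers an ordinary logistic regression; increasing it trades accuracy for smaller FPR/FNR gaps between the groups, which is the tradeoff the empirical evaluation on COMPAS explores.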


