Wasserstein Fair Classification

07/28/2019
by Ray Jiang, et al.

We propose an approach to fair classification that enforces independence between the classifier outputs and sensitive information by minimizing Wasserstein-1 distances. The approach has desirable theoretical properties and is robust to the specific choice of threshold used to obtain class predictions from model outputs. We introduce several methods that either allow sensitive information to be hidden at test time or admit a simple and fast implementation. We demonstrate empirical performance against a range of fairness baselines on several benchmark fairness datasets.
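
The threshold robustness follows from a property of the one-dimensional Wasserstein-1 distance: between two score distributions it equals the integral over all thresholds of the absolute difference in positive rates, so driving it to zero equalizes predictions across groups at every threshold. The sketch below is a minimal NumPy illustration of such a penalty, not the authors' exact algorithm: the names wasserstein_1d and demographic_parity_penalty are hypothetical, and penalizing each group's scores against the pooled scores is a simplification of the paper's formulation.

    import numpy as np

    def wasserstein_1d(a, b, n_quantiles=100):
        # In 1-D, W1(a, b) equals the integral over q in [0, 1] of
        # |F_a^{-1}(q) - F_b^{-1}(q)|, approximated here on a uniform
        # grid of quantiles.
        qs = np.linspace(0.0, 1.0, n_quantiles)
        return np.mean(np.abs(np.quantile(a, qs) - np.quantile(b, qs)))

    def demographic_parity_penalty(scores, groups):
        # Sum of W1 distances between each group's score distribution and
        # the pooled one. Because W1 in 1-D also equals the integral over
        # thresholds of the absolute difference in positive rates, pushing
        # this penalty toward zero makes class predictions (approximately)
        # independent of group membership at every threshold.
        return sum(
            wasserstein_1d(scores[groups == g], scores)
            for g in np.unique(groups)
        )

    # Toy usage with synthetic scores and a binary sensitive attribute:
    rng = np.random.default_rng(0)
    groups = rng.integers(0, 2, size=1000)
    scores = rng.beta(2 + groups, 2, size=1000)  # group-dependent scores
    print(demographic_parity_penalty(scores, groups))

In an actual training loop, a penalty like this would be added to the classification loss with a trade-off weight, and a differentiable surrogate (e.g., sorting-based quantile estimates) would replace np.quantile for gradient-based optimization.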


Related research

05/19/2020
Fair Inputs and Fair Outputs: The Incompatibility of Fairness in Privacy and Accuracy
Fairness concerns about algorithmic decision-making systems have been ma...

03/06/2018
A Reductions Approach to Fair Classification
We present a systematic approach for achieving fairness in a binary clas...

08/01/2022
GetFair: Generalized Fairness Tuning of Classification Models
We present GetFair, a novel framework for tuning fairness of classificat...

07/18/2020
A Distributionally Robust Approach to Fair Classification
We propose a distributionally robust logistic regression model with an u...

10/19/2018
Taking Advantage of Multitask Learning for Fair Classification
A central goal of algorithmic fairness is to reduce bias in automated de...

11/12/2019
Fairness-Aware Neural Rényi Minimization for Continuous Features
The past few years have seen a dramatic rise of academic and societal in...

03/11/2021
Wasserstein Robust Support Vector Machines with Fairness Constraints
We propose a distributionally robust support vector machine with a fairn...
