Taking Advantage of Multitask Learning for Fair Classification

10/19/2018
by Luca Oneto, et al.

A central goal of algorithmic fairness is to reduce bias in automated decision making. An unavoidable tension exists between the accuracy gains obtained by using sensitive information (e.g., gender or ethnic group) as part of a statistical model and any commitment to protect these characteristics. Often, due to biases present in the data, using the sensitive information in the functional form of a classifier improves classification accuracy. In this paper we show how to get the best of both worlds: optimize both model accuracy and fairness without explicitly using the sensitive feature in the functional form of the model, thereby treating different individuals equally. Our method is based on two key ideas. On the one hand, we propose to use Multitask Learning (MTL), enhanced with fairness constraints, to jointly learn group-specific classifiers that leverage information across sensitive groups. On the other hand, since learning group-specific models might not be permitted, we propose to first predict the sensitive feature with any learning method and then to use the predicted sensitive feature to train MTL with fairness constraints. This enables us to tackle fairness with a three-pronged approach: increasing accuracy on each group, enforcing measures of fairness during training, and protecting sensitive information during testing. Experimental results on two real datasets support our proposal, showing substantial improvements in both accuracy and fairness.
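To make the two-stage recipe in the abstract concrete, the following is a minimal sketch, not the authors' implementation: first a sensitive attribute is predicted with an off-the-shelf classifier so it never enters the functional form of the final model, then group-specific linear "tasks" sharing a common parameter vector are trained under a demographic-parity-style penalty. The synthetic data, the specific penalty, and all function names (e.g., fit_mtl_fair) are illustrative assumptions rather than the paper's exact formulation.

```python
# Sketch of the two-stage idea: predict the sensitive attribute, then train
# a simple multitask (shared + group-specific) linear model with a fairness
# penalty. Assumed/illustrative code, not the paper's algorithm.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Synthetic data: features X, binary label y, binary sensitive attribute s.
n, d = 2000, 5
X = rng.normal(size=(n, d))
s = (X[:, 0] + 0.5 * rng.normal(size=n) > 0).astype(int)
y = (X @ rng.normal(size=d) + 0.8 * s + 0.3 * rng.normal(size=n) > 0).astype(int)

# Stage 1: predict the sensitive attribute so it is never used explicitly
# in the functional form of the final classifier.
s_hat = LogisticRegression().fit(X, s).predict(X)

# Stage 2: multitask model f_g(x) = (w0 + v_g) . x with a shared part w0 and
# group-specific corrections v_g, plus a penalty on the gap between the
# groups' average scores (a demographic-parity surrogate).
def fit_mtl_fair(X, y, s_hat, lam_share=1.0, lam_fair=5.0, lr=0.1, epochs=300):
    d = X.shape[1]
    w0 = np.zeros(d)
    v = np.zeros((2, d))               # one correction per (predicted) group
    for _ in range(epochs):
        w = w0[None, :] + v            # per-group weights, shape (2, d)
        scores = np.einsum('ij,ij->i', X, w[s_hat])
        p = 1.0 / (1.0 + np.exp(-scores))
        err = p - y                    # logistic-loss gradient factor
        grad = np.zeros((2, d))
        for g in (0, 1):
            m = s_hat == g
            grad[g] = X[m].T @ err[m] / len(y)
        # Fairness penalty: (mean score of group 1 - mean score of group 0)^2
        gap = scores[s_hat == 1].mean() - scores[s_hat == 0].mean()
        grad_fair = np.zeros((2, d))
        grad_fair[1] = 2 * gap * X[s_hat == 1].mean(axis=0)
        grad_fair[0] = -2 * gap * X[s_hat == 0].mean(axis=0)
        total = grad + lam_fair * grad_fair
        w0 -= lr * total.sum(axis=0)              # shared part sees both groups
        v -= lr * (total + lam_share * v)         # shrink group-specific parts
    return w0, v

w0, v = fit_mtl_fair(X, y, s_hat)
```

Shrinking the group-specific corrections toward zero is what couples the two tasks: each group's classifier borrows statistical strength from the other, which is the multitask effect the abstract refers to, while the gap penalty plays the role of the fairness constraint enforced during training.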
