Hyper-parameter Tuning for Fair Classification without Sensitive Attribute Access

02/02/2023
by Akshaj Kumar Veldanda, et al.

Fair machine learning methods seek to train models that balance performance across demographic subgroups defined over sensitive attributes like race and gender. Although sensitive attributes are typically assumed to be known during training, they may not be available in practice due to privacy and other logistical concerns. Recent work has sought to train fair models without sensitive attributes on training data. However, these methods require extensive hyper-parameter tuning to achieve good results, and hence assume that sensitive attributes are known on validation data, an assumption that may itself be impractical. Here, we propose Antigone, a framework to train fair classifiers without access to sensitive attributes on either training or validation data. Instead, we generate pseudo sensitive attributes on the validation data by training a biased classifier and using the classifier's incorrectly (correctly) labeled examples as proxies for minority (majority) groups. Since fairness metrics like demographic parity, equal opportunity, and subgroup accuracy can be estimated to within a proportionality constant even with noisy sensitive attribute information, we show theoretically and empirically that these proxy labels can be used to maximize fairness under average accuracy constraints. Key to our results is a principled approach to select the hyper-parameters of the biased classifier in a completely unsupervised fashion (meaning without access to ground-truth sensitive attributes) that minimizes the gap between fairness estimated using noisy versus ground-truth sensitive labels.
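The proxy-labeling step lends itself to a short sketch. The Python snippet below is a minimal illustration of the idea as described in the abstract, not the authors' implementation: `biased_model`, `candidate_models`, and `min_avg_acc` are assumed names for scikit-learn-style classifiers and an average-accuracy floor, and the fairness metric shown is the subgroup-accuracy gap.

```python
# Minimal sketch, assuming classifiers with a scikit-learn-style .predict().
import numpy as np

def pseudo_sensitive_attributes(biased_model, X_val, y_val):
    """Mark validation examples the biased model misclassifies as the
    pseudo-minority group (1) and correctly classified ones as the
    pseudo-majority group (0)."""
    preds = biased_model.predict(X_val)
    return (preds != y_val).astype(int)

def accuracy_gap(model, X_val, y_val, groups):
    """Subgroup-accuracy gap estimated with (possibly noisy) group labels."""
    correct = (model.predict(X_val) == y_val)
    return abs(correct[groups == 0].mean() - correct[groups == 1].mean())

def select_fairest(candidate_models, biased_model, X_val, y_val, min_avg_acc=0.0):
    """Pick the candidate with the smallest pseudo-group accuracy gap,
    subject to an average-accuracy constraint on the validation set."""
    groups = pseudo_sensitive_attributes(biased_model, X_val, y_val)
    feasible = [m for m in candidate_models
                if (m.predict(X_val) == y_val).mean() >= min_avg_acc]
    # Fall back to all candidates if none meets the accuracy constraint.
    return min(feasible or candidate_models,
               key=lambda m: accuracy_gap(m, X_val, y_val, groups))
```

Because the pseudo labels estimate the true fairness gap only up to a proportionality constant, they suffice for ranking candidate hyper-parameter settings even though they do not recover the gap's absolute value.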

