Learning Fair Classifiers with Partially Annotated Group Labels

11/29/2021
by Sangwon Jung, et al.

Fairness-aware learning has recently become increasingly important, but we note that most existing methods assume the availability of fully annotated group labels. We emphasize that this assumption is unrealistic for real-world applications, since group-label annotation is expensive and can raise privacy concerns. In this paper, we consider a more practical scenario, dubbed Algorithmic Fairness with Partially annotated Group labels (Fair-PG). We observe that under Fair-PG, existing fairness methods, which use only the data with group labels, perform even worse than vanilla training, which simply uses the full data with target labels alone. To address this problem, we propose a simple Confidence-based Group Label assignment (CGL) strategy that is readily applicable to any fairness-aware learning method. CGL uses an auxiliary group classifier to assign pseudo group labels, assigning random labels to low-confidence samples. We first show theoretically that this design is better than the vanilla pseudo-labeling strategy in terms of fairness criteria. We then show empirically, on the UTKFace, CelebA, and COMPAS datasets, that combining CGL with state-of-the-art fairness-aware in-processing methods jointly improves target accuracy and fairness metrics over the baseline methods. Furthermore, we show that CGL naturally enables augmenting the given group-labeled dataset with external datasets that have only target labels, so that both accuracy and fairness metrics improve. We will release our implementation publicly so that future research can reproduce our results.
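Since the abstract spells out the CGL assignment rule, a minimal sketch may help make it concrete. The snippet below assumes a scikit-learn-style auxiliary group classifier exposing predict_proba, and the threshold name tau and function name are hypothetical choices, not names taken from the paper:

```python
import numpy as np

def assign_pseudo_group_labels(group_clf, X_unlabeled, num_groups, tau=0.9, rng=None):
    """Sketch of Confidence-based Group Label (CGL) assignment.

    High-confidence samples get the auxiliary group classifier's
    prediction as a pseudo group label; low-confidence samples get a
    uniformly random group label instead (the behavior the abstract
    describes).
    """
    rng = rng if rng is not None else np.random.default_rng()
    probs = group_clf.predict_proba(X_unlabeled)  # shape (n_samples, num_groups)
    confidence = probs.max(axis=1)
    predicted = probs.argmax(axis=1)
    random_labels = rng.integers(0, num_groups, size=len(predicted))
    # Keep the classifier's label only when it is confident enough.
    return np.where(confidence >= tau, predicted, random_labels)
```

The pseudo group labels produced this way could then be merged with the ground-truth group labels and fed to any fairness-aware in-processing method; the threshold value and the uniform random fallback are assumptions made here to match the abstract's description.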


Related research

02/12/2023 · On Testing and Comparing Fair Classifiers under Data Bias
In this paper, we consider a theoretical model for injecting data bias, ...

02/24/2022 · A Fair Empirical Risk Minimization with Generalized Entropy
Recently a parametric family of fairness metrics to quantify algorithmic...

09/18/2020 · Group Fairness by Probabilistic Modeling with Latent Fair Decisions
Machine learning systems are increasingly being used to make impactful d...

06/06/2022 · FIFA: Making Fairness More Generalizable in Classifiers Trained on Imbalanced Data
Algorithmic fairness plays an important role in machine learning and imp...

01/29/2021 · Beyond Traditional Assumptions in Fair Machine Learning
This thesis scrutinizes common assumptions underlying traditional machin...

08/28/2023 · Fair Few-shot Learning with Auxiliary Sets
Recently, there has been a growing interest in developing machine learni...

10/31/2020 · Fair Classification with Group-Dependent Label Noise
This work examines how to train fair classifiers in settings where train...
