FIFA: Making Fairness More Generalizable in Classifiers Trained on Imbalanced Data

06/06/2022
by   Zhun Deng, et al.

Algorithmic fairness plays an important role in machine learning, and imposing fairness constraints during learning is a common approach. However, many datasets are imbalanced in certain label classes (e.g., "healthy") and sensitive subgroups (e.g., "older patients"). Empirically, this imbalance leads to a lack of generalizability not only of classification, but also of fairness properties, especially in over-parameterized models. For example, fairness-aware training may ensure equalized odds (EO) on the training data, but EO can be far from satisfied on new users. In this paper, we propose a theoretically principled yet Flexible approach that is Imbalance-Fairness-Aware (FIFA). Specifically, FIFA encourages both classification and fairness generalization and can be flexibly combined with many existing fair learning methods that use logits-based losses. While our main focus is on EO, FIFA can be directly applied to achieve equalized opportunity (EqOpt); under certain conditions, it can also be applied to other fairness notions. We demonstrate the power of FIFA by combining it with a popular fair classification algorithm; the resulting algorithm achieves significantly better fairness generalization on several real-world datasets.
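To make the equalized odds (EO) notion concrete: a classifier satisfies EO when its true-positive and false-positive rates match across sensitive groups. The sketch below (not from the paper; a standard metric implementation, assuming binary labels and predictions with both label values present in every group) measures the EO violation as the largest cross-group gap in either rate. The abstract's point is that this gap can be near zero on imbalanced training data yet large on held-out data.

```python
import numpy as np

def equalized_odds_gap(y_true, y_pred, group):
    """EO violation: the max cross-group difference in TPR or FPR.

    Assumes binary y_true/y_pred and that each group contains both
    positive and negative examples (otherwise a rate is undefined).
    """
    y_true, y_pred, group = map(np.asarray, (y_true, y_pred, group))
    tprs, fprs = [], []
    for g in np.unique(group):
        mask = group == g
        pos = y_true[mask] == 1
        neg = y_true[mask] == 0
        tprs.append(y_pred[mask][pos].mean())  # P(yhat=1 | y=1, group=g)
        fprs.append(y_pred[mask][neg].mean())  # P(yhat=1 | y=0, group=g)
    return max(max(tprs) - min(tprs), max(fprs) - min(fprs))
```

Evaluating this gap separately on training and test splits exposes the fairness-generalization failure the paper targets: fairness-constrained training drives the training-set gap to zero, while the test-set gap can remain large when label classes or subgroups are imbalanced.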

Related research:

- Fairness-aware Classification: Criterion, Convexity, and Bounds (09/13/2018)
- Fair Mixup: Fairness via Interpolation (03/11/2021)
- Fairness-aware Class Imbalanced Learning (09/21/2021)
- An Incentive Security Model to Provide Fairness for Peer-to-Peer Networks (06/22/2019)
- Variance, Self-Consistency, and Arbitrariness in Fair Classification (01/27/2023)
- FORML: Learning to Reweight Data for Fairness (02/03/2022)
- Learning Fair Classifiers with Partially Annotated Group Labels (11/29/2021)
