FARE: Provably Fair Representation Learning

10/13/2022
by Nikola Jovanović, et al.

Fair representation learning (FRL) is a popular class of methods aiming to produce fair classifiers via data preprocessing. However, recent work has shown that prior methods achieve worse accuracy-fairness tradeoffs than originally suggested by their results. This dictates the need for FRL methods that provide provable upper bounds on the unfairness of any downstream classifier, a challenge yet unsolved. In this work, we address this challenge and propose Fairness with Restricted Encoders (FARE), the first FRL method with provable fairness guarantees. Our key insight is that restricting the representation space of the encoder enables us to derive suitable fairness guarantees, while allowing empirical accuracy-fairness tradeoffs comparable to prior work. FARE instantiates this idea with a tree-based encoder, a choice motivated by the inherent advantages of decision trees when applied in our setting. Crucially, we develop and apply a practical statistical procedure that computes a high-confidence upper bound on the unfairness of any downstream classifier. In our experimental evaluation on several datasets and settings, we demonstrate that FARE produces tight upper bounds, often comparable with the empirical results of prior methods, which establishes the practical value of our approach.
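The core idea can be sketched in a few lines (a simplified illustration, not the authors' implementation: FARE's tree is trained with a fairness-aware splitting criterion, and its statistical procedure yields a high-confidence bound rather than the plain empirical estimate shown here). A decision tree partitions the feature space into finitely many leaves, so the encoder z = leaf(x) has a restricted, discrete representation space. Any downstream classifier can only depend on the leaf, so its demographic parity distance is at most the total variation distance between the two groups' leaf distributions:

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

# Hypothetical synthetic data (for illustration only).
rng = np.random.default_rng(0)
n = 2000
s = rng.integers(0, 2, n)                            # sensitive attribute
x = rng.normal(loc=s[:, None] * 0.5, size=(n, 4))    # features correlated with s
y = (x[:, 0] + rng.normal(size=n) > 0).astype(int)   # task label

# Tree-based encoder: each input is mapped to one of a small number of leaves,
# i.e. the representation space is restricted to a finite set of cells.
tree = DecisionTreeClassifier(max_leaf_nodes=8, random_state=0).fit(x, y)
cells = tree.apply(x)                                 # encoder: x -> leaf id

# Worst-case unfairness of ANY classifier built on the cells: the classifier
# that outputs 1 exactly on cells more likely under one group attains the
# total variation distance between the groups' cell distributions, so that
# distance is an upper bound (empirical estimate here).
leaf_ids = np.unique(cells)
p0 = np.array([(cells[s == 0] == c).mean() for c in leaf_ids])
p1 = np.array([(cells[s == 1] == c).mean() for c in leaf_ids])
tv_bound = 0.5 * np.abs(p0 - p1).sum()
print(f"empirical unfairness upper bound (TV distance): {tv_bound:.3f}")
```

Because the representation space is finite, this quantity can be estimated from data with explicit confidence intervals, which is what makes a provable (rather than merely empirical) guarantee tractable.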

Related research:

- 07/27/2021: Adversarial Stacked Auto-Encoders for Fair Representation Learning. "Training machine learning models with the only accuracy as a final goal ..."
- 04/12/2022: Breaking Fair Binary Classification with Optimal Flipping Attacks. "Minimizing risk with fairness constraints is one of the popular approach..."
- 06/10/2021: Fair Normalizing Flows. "Fair representation learning is an attractive approach that promises fai..."
- 05/31/2021: Rawlsian Fair Adaptation of Deep Learning Classifiers. "Group-fairness in classification aims for equality of a predictive utili..."
- 11/26/2021: Latent Space Smoothing for Individually Fair Representations. "Fair representation learning encodes user data to ensure fairness and ut..."
- 06/15/2018: Classification with Fairness Constraints: A Meta-Algorithm with Provable Guarantees. "Developing classification algorithms that are fair with respect to sensi..."
- 12/01/2020: Data Preprocessing to Mitigate Bias with Boosted Fair Mollifiers. "In a recent paper, Celis et al. (2020) introduced a new approach to fair..."
