README: REpresentation learning by fairness-Aware Disentangling MEthod

07/07/2020
by Sungho Park, et al.

Fair representation learning aims to encode representations that are invariant with respect to a protected attribute, such as gender or age. In this paper, we design the Fairness-aware Disentangling Variational AutoEncoder (FD-VAE) for fair representation learning. This network disentangles the latent space into three subspaces, using a decorrelation loss that encourages each subspace to contain independent information: 1) target attribute information, 2) protected attribute information, and 3) mutual attribute information. After representation learning, the disentangled representation is leveraged for fairer downstream classification by excluding the subspace containing the protected attribute information. We demonstrate the effectiveness of our model through extensive experiments on the CelebA and UTKFace datasets. Our method outperforms the previous state-of-the-art method by large margins in terms of equal opportunity and equalized odds.
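The core idea above is that the latent code is partitioned into blocks, and a decorrelation loss penalizes statistical dependence between blocks. A minimal sketch of one such penalty, assuming the subspaces are concatenated along the feature axis, is shown below; the function names (`split_latent`, `decorrelation_loss`) and the choice of squared cross-covariance as the dependence measure are illustrative assumptions, not the paper's exact formulation:

```python
import numpy as np

def split_latent(z, dims):
    """Split latent codes of shape (N, sum(dims)) into per-subspace blocks,
    e.g. dims = (d_target, d_protected, d_mutual)."""
    idx = np.cumsum(dims)[:-1]
    return np.split(z, idx, axis=1)

def decorrelation_loss(z, dims):
    """Sum of squared cross-covariances between every pair of distinct
    subspaces; zero when the blocks are linearly uncorrelated."""
    blocks = [b - b.mean(axis=0, keepdims=True) for b in split_latent(z, dims)]
    n = z.shape[0]
    loss = 0.0
    for i in range(len(blocks)):
        for j in range(i + 1, len(blocks)):
            cross = blocks[i].T @ blocks[j] / (n - 1)  # cross-covariance
            loss += float(np.sum(cross ** 2))
    return loss
```

In training, such a term would be added to the usual VAE objective so the encoder is pushed to place target, protected, and mutual information into separate blocks; a downstream classifier then simply drops the protected block.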


Related research

- Effectiveness of Equalized Odds for Fair Classification under Imperfect Group Information (06/07/2019): Most approaches for ensuring or improving a model's fairness with respec...
- FLAC: Fairness-Aware Representation Learning by Suppressing Attribute-Class Associations (04/27/2023): Bias in computer vision systems can perpetuate or even amplify discrimin...
- Invariant Representations with Stochastically Quantized Neural Networks (08/04/2022): Representation learning algorithms offer the opportunity to learn invari...
- Blocked and Hierarchical Disentangled Representation From Information Theory Perspective (01/21/2021): We propose a novel and theoretical model, blocked and hierarchical varia...
- FairNN - Conjoint Learning of Fair Representations for Fair Decisions (04/05/2020): In this paper, we propose FairNN, a neural network that performs joint fe...
- Achieving Utility, Fairness, and Compactness via Tunable Information Bottleneck Measures (06/20/2022): Designing machine learning algorithms that are accurate yet fair, not di...
- Fair Classification under Covariate Shift and Missing Protected Attribute - an Investigation using Related Features (04/17/2022): This study investigated the problem of fair classification under Covaria...
