Learning Adversarially Fair and Transferable Representations

02/17/2018
by David Madras, et al.

In this work, we advocate for representation learning as the key to mitigating unfair prediction outcomes downstream. We envision a scenario where learned representations may be handed off to other entities with unknown objectives. We propose and explore adversarial representation learning as a natural method of ensuring those entities will act fairly, and connect group fairness (demographic parity, equalized odds, and equal opportunity) to different adversarial objectives. Through worst-case theoretical guarantees and experimental validation, we show that the choice of this objective is crucial to fair prediction. Furthermore, we present the first in-depth experimental demonstration of fair transfer learning, by showing that our learned representations admit fair predictions on new tasks while maintaining utility, an essential goal of fair representation learning.
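The min-max structure described above can be sketched with linear models: an encoder maps inputs to a representation, a classifier predicts the task label from it, and an adversary tries to recover the sensitive attribute, while the encoder is trained to defeat the adversary. This is an illustrative toy sketch only; the networks, the trade-off weight `lam`, and the toy data are assumptions, and the paper's actual adversarial objectives differ per fairness criterion.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: 5 features, a binary task label y, and a binary sensitive
# attribute a. Here y and a depend on independent features, so a fair
# representation can keep task signal while discarding a.
n, d, k = 200, 5, 3
x = rng.normal(size=(n, d))
a = (x[:, 0] > 0).astype(float)   # sensitive attribute lives in feature 0
y = (x[:, 1] > 0).astype(float)   # task label lives in feature 1

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-np.clip(z, -30, 30)))

def fit_logreg(feats, target, steps=500, lr=0.1):
    """Plain logistic regression by gradient descent (baseline helper)."""
    w = np.zeros(feats.shape[1])
    for _ in range(steps):
        w -= lr * feats.T @ (sigmoid(feats @ w) - target) / len(target)
    return w

# Linear encoder, task classifier, and adversary: hypothetical stand-ins
# for the paper's networks.
W_enc = rng.normal(scale=0.1, size=(d, k))
w_clf = rng.normal(scale=0.1, size=k)
w_adv = rng.normal(scale=0.1, size=k)

lam, lr = 1.0, 0.1
for _ in range(500):
    z = x @ W_enc                  # representation
    g_y = sigmoid(z @ w_clf) - y   # grad of task cross-entropy wrt logits
    g_a = sigmoid(z @ w_adv) - a   # grad of adversary cross-entropy wrt logits

    w_adv -= lr * z.T @ g_a / n    # adversary: minimise its own loss
    w_clf -= lr * z.T @ g_y / n    # classifier: minimise task loss
    # Encoder: minimise task loss but MAXIMISE adversary loss, pushing the
    # representation z to be uninformative about a (demographic-parity flavour).
    grad_z = np.outer(g_y, w_clf) - lam * np.outer(g_a, w_adv)
    W_enc -= lr * x.T @ grad_z / n

z = x @ W_enc
task_acc = np.mean((sigmoid(z @ w_clf) > 0.5) == y)
adv_acc = np.mean((sigmoid(z @ w_adv) > 0.5) == a)
# Baseline: a is easy to predict from the raw features.
raw_acc = np.mean((sigmoid(x @ fit_logreg(x, a)) > 0.5) == a)
print(f"task acc {task_acc:.2f}, adversary acc {adv_acc:.2f}, raw-x acc {raw_acc:.2f}")
```

The key design point is the sign flip in `grad_z`: the encoder follows gradient *ascent* on the adversary's loss, so at convergence the adversary predicts the sensitive attribute much worse from the learned representation than from the raw inputs, while task accuracy is retained.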


Related research

06/06/2019 - Flexibly Fair Representation Learning by Disentanglement
We consider the problem of learning representations that achieve group a...

07/31/2021 - Fair Representation Learning using Interpolation Enabled Disentanglement
With the growing interest in the machine learning community to solve rea...

03/16/2022 - Adversarial Learned Fair Representations using Dampening and Stacking
As more decisions in our daily life become automated, the need to have m...

06/10/2021 - Fair Normalizing Flows
Fair representation learning is an attractive approach that promises fai...

10/30/2019 - DADI: Dynamic Discovery of Fair Information with Adversarial Reinforcement Learning
We introduce a framework for dynamic adversarial discovery of informatio...

01/11/2021 - Controllable Guarantees for Fair Outcomes via Contrastive Information Estimation
Controlling bias in training datasets is vital for ensuring equal treatm...

06/20/2022 - Achieving Utility, Fairness, and Compactness via Tunable Information Bottleneck Measures
Designing machine learning algorithms that are accurate yet fair, not di...
