
Learning Fair and Transferable Representations

by Luca Oneto et al.

Developing learning methods that do not discriminate against subgroups in the population is a central goal of algorithmic fairness. One way to reach this goal is by modifying the data representation so that it meets certain fairness constraints. In this work we measure fairness according to demographic parity, which requires the probability of each possible model decision to be independent of the sensitive information. We argue that the goal of imposing demographic parity can be substantially facilitated within a multitask learning setting. We leverage task similarities by encouraging a shared fair representation across the tasks via low-rank matrix factorization. We derive learning bounds establishing that the learned representation transfers well to novel tasks, both in terms of prediction performance and fairness metrics. We present experiments on three real-world datasets, showing that the proposed method outperforms state-of-the-art approaches by a significant margin.
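As a minimal illustration of the fairness criterion used above (not the paper's learning method), demographic parity for a binary classifier can be checked by comparing positive-prediction rates across the groups defined by a binary sensitive attribute; the function name and toy data below are our own:

```python
import numpy as np

def demographic_parity_difference(y_pred, sensitive):
    """Absolute gap in positive-prediction rates between the two groups
    defined by a binary sensitive attribute. A value of 0 means the
    decision is statistically independent of the group membership."""
    y_pred = np.asarray(y_pred)
    sensitive = np.asarray(sensitive)
    rate_group0 = y_pred[sensitive == 0].mean()  # P(decision = 1 | group 0)
    rate_group1 = y_pred[sensitive == 1].mean()  # P(decision = 1 | group 1)
    return abs(rate_group0 - rate_group1)

# Toy example: group 0 receives positives at rate 0.5, group 1 at 0.75
y_pred    = np.array([1, 0, 0, 1, 1, 1, 1, 0])
sensitive = np.array([0, 0, 0, 0, 1, 1, 1, 1])
print(demographic_parity_difference(y_pred, sensitive))  # 0.25
```

A representation satisfying demographic parity would drive this gap toward zero for any classifier trained on top of it.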

