Achieving Utility, Fairness, and Compactness via Tunable Information Bottleneck Measures

06/20/2022
by Adam Gronowski, et al.

Designing machine learning algorithms that are accurate yet fair, not discriminating based on any sensitive attribute, is of paramount importance for society to accept AI for critical applications. In this article, we propose a novel fair representation learning method termed the Rényi Fair Information Bottleneck Method (RFIB), which incorporates constraints for utility, fairness, and compactness of representation, and apply it to image classification. A key attribute of our approach is that, in contrast to most prior work, we consider both demographic parity and equalized odds as fairness constraints, allowing for a more nuanced satisfaction of both criteria. Leveraging a variational approach, we show that our objectives yield a loss function involving classical Information Bottleneck (IB) measures, and we establish an upper bound, in terms of the Rényi divergence of order α, on the mutual information IB term that measures compactness between the input and its encoded embedding. Experimenting on three different image datasets (EyePACS, CelebA, and FairFace), we study the influence of the α parameter as well as two other tunable IB parameters on achieving utility/fairness trade-off goals, and show that the α parameter gives an additional degree of freedom that can be used to control the compactness of the representation. We evaluate the performance of our method using various utility, fairness, and compound utility/fairness metrics, showing that RFIB outperforms current state-of-the-art approaches.
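To make the compactness term concrete, the sketch below computes the closed-form Rényi divergence of order α between a diagonal Gaussian encoder distribution N(μ, diag(σ²)) and a standard normal prior N(0, I), the kind of quantity that can upper-bound the I(X; Z) compactness term in a variational IB objective. This is an illustrative sketch under assumptions, not the paper's implementation; the function name and interface are hypothetical, and the formula is the standard closed form for the Rényi divergence between Gaussians (valid for α ≠ 1 with α + (1 − α)σ² > 0 elementwise).

```python
import numpy as np

def renyi_gaussian_to_std_normal(mu, sigma2, alpha):
    """Renyi divergence of order alpha between a diagonal Gaussian
    N(mu, diag(sigma2)) and the standard normal prior N(0, I).

    Hypothetical helper for illustration. Uses the closed form
      D_alpha = (alpha/2) * mu^T S_alpha^{-1} mu
                - 1/(2(alpha-1)) * [ln|S_alpha| - (1-alpha) ln|Sigma|],
    where S_alpha = alpha*I + (1-alpha)*diag(sigma2).
    Requires alpha != 1 and alpha + (1-alpha)*sigma2 > 0 elementwise;
    as alpha -> 1 the value approaches the usual KL divergence.
    """
    mu = np.asarray(mu, dtype=float)
    sigma2 = np.asarray(sigma2, dtype=float)
    s_alpha = alpha + (1.0 - alpha) * sigma2  # interpolated variances
    quad = 0.5 * alpha * np.sum(mu**2 / s_alpha)
    log_term = np.sum(np.log(s_alpha) - (1.0 - alpha) * np.log(sigma2))
    return quad - log_term / (2.0 * (alpha - 1.0))
```

In a training loop, a term like this would typically be added to the task loss as a weighted compactness penalty, with α acting as the extra tuning knob the abstract describes: small α penalizes the encoder more gently, larger α more aggressively.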


Related research

- Rényi Fair Information Bottleneck for Image Classification (03/09/2022)
- Learning Fair and Transferable Representations (06/25/2019)
- A Variational Approach to Privacy and Fairness (06/11/2020)
- README: REpresentation learning by fairness-Aware Disentangling MEthod (07/07/2020)
- Learning Adversarially Fair and Transferable Representations (02/17/2018)
- FairAdaBN: Mitigating unfairness with adaptive batch normalization and its application to dermatological disease classification (03/15/2023)
- A Balance for Fairness: Fair Distribution Utilising Physics in Games of Characteristic Function Form (01/27/2021)
