Null-sampling for Interpretable and Fair Representations

08/12/2020
by   Thomas Kehrenberg, et al.

We propose to learn invariant representations, in the data domain, to achieve interpretability in algorithmic fairness. Invariance implies a selectivity for high-level, relevant correlations w.r.t. the class label annotations, and a robustness to irrelevant correlations with protected characteristics such as race or gender. We introduce a non-trivial setup in which the training set exhibits a strong bias, such that relevant and spurious correlations cannot be distinguished from the class label annotations alone. To address this problem, we introduce an adversarially trained model with a null-sampling procedure that produces invariant representations in the data domain. To enable disentanglement, a partially-labelled representative set is used. Because the representations live in the data domain, the changes made by the model can be examined directly by human auditors. We show the effectiveness of our method on both image and tabular datasets: Coloured MNIST, CelebA, and the Adult dataset.
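The core idea, as the abstract describes it, is that an encoder splits the representation into a part tied to the protected characteristic and a part tied to the class label, and null-sampling zeroes out the protected part before decoding back to the data domain. The following is a minimal, hypothetical sketch of that mechanism only: the function names and the toy orthogonal map standing in for the paper's trained invertible encoder are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

# Toy sketch of null-sampling (illustrative, NOT the paper's architecture):
# an invertible encoder maps x to a latent z split into z_s (tied to the
# protected characteristic s) and z_y (everything else). Null-sampling
# zeroes z_s and decodes back to the data domain, so the s-invariant
# result can be inspected directly by a human auditor.

rng = np.random.default_rng(0)

d, d_s = 6, 2                                   # latent size, size of z_s
Q, _ = np.linalg.qr(rng.normal(size=(d, d)))    # toy invertible (orthogonal) encoder

def encode(x):
    return Q @ x

def decode(z):
    return Q.T @ z              # exact inverse of the orthogonal map

def null_sample(x):
    z = encode(x)
    z[:d_s] = 0.0               # zero out the protected partition z_s
    return decode(z)            # back to the data domain for inspection

x = rng.normal(size=d)
x_invariant = null_sample(x)

# Re-encoding the result shows its protected partition is (numerically) zero:
print(np.allclose(encode(x_invariant)[:d_s], 0.0))  # True
```

Note that the operation is idempotent: null-sampling an already null-sampled input changes nothing, which matches the intuition that all information about the protected characteristic has been removed.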


Related research

- 03/24/2022: Addressing Missing Sources with Adversarial Support-Matching
- 10/15/2018: Neural Styling for Interpretable Fair Representations
- 04/27/2023: FLAC: Fairness-Aware Representation Learning by Suppressing Attribute-Class Associations
- 11/16/2019: Towards Reducing Bias in Gender Classification
- 11/15/2020: Debiasing Convolutional Neural Networks via Meta Orthogonalization
- 05/29/2020: Overview of Scanner Invariant Representations
- 09/21/2023: Environment-biased Feature Ranking for Novelty Detection Robustness
