Feature and Label Embedding Spaces Matter in Addressing Image Classifier Bias

10/27/2021
by William Thong, et al.

This paper strives to address image classifier bias, with a focus on both feature and label embedding spaces. Previous works have shown that spurious correlations with protected attributes, such as age, gender, or skin tone, can lead to adverse decisions. To limit such potential harms, there is a growing need to identify and mitigate image classifier bias. First, we identify a bias direction in the feature space: we compute class prototypes for each value of the protected attribute in every class, and reveal a subspace that captures the maximum variance of the bias. Second, we mitigate bias by mapping image inputs to label embedding spaces. Each value of the protected attribute has its own projection head, where classes are embedded through a latent vector representation rather than a common one-hot encoding. Once trained, we further reduce the bias effect in the feature space by removing its direction. Evaluation on biased image datasets, covering multi-class, multi-label and binary classification, shows that tackling both feature and label embedding spaces improves the fairness of classifier predictions while preserving classification performance.
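
As a rough illustration of the feature-space steps described in the abstract, the NumPy sketch below computes attribute-conditioned class prototypes, estimates a bias direction as the dominant direction of the prototype differences, and projects that direction out of the features. This is a minimal sketch under our own assumptions (the helper names, the binary protected attribute, and the SVD-based estimate of the bias subspace), not the authors' implementation.

```python
# Sketch of: (1) estimating a bias direction from per-attribute class
# prototypes and (2) removing that direction from image features.
# Illustrative only; helper names and the SVD-based estimate are assumptions.
import numpy as np

def class_prototypes(features, labels, attrs):
    """Mean feature (prototype) for every (class, protected-attribute) pair."""
    protos = {}
    for c in np.unique(labels):
        for a in np.unique(attrs):
            mask = (labels == c) & (attrs == a)
            if mask.any():
                protos[(c, a)] = features[mask].mean(axis=0)
    return protos

def bias_direction(protos):
    """Dominant direction of the prototype differences across attribute values.

    For each class, the difference between its attribute-conditioned prototypes
    points along the bias; the first right singular vector of the stacked
    differences captures the direction of maximum bias variance.
    """
    classes = sorted({c for c, _ in protos})
    attrs = sorted({a for _, a in protos})
    diffs = []
    for c in classes:
        rows = [protos[(c, a)] for a in attrs if (c, a) in protos]
        if len(rows) >= 2:
            diffs.append(rows[0] - rows[1])  # assumes a binary protected attribute
    _, _, vt = np.linalg.svd(np.stack(diffs), full_matrices=False)
    return vt[0]  # unit-norm bias direction

def remove_bias(features, direction):
    """Project features onto the orthogonal complement of the bias direction."""
    d = direction / np.linalg.norm(direction)
    return features - np.outer(features @ d, d)
```

In this reading, the same projection would be applied to features at test time before classification; the label-embedding projection heads per attribute value are a separate, learned component and are not sketched here.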

Related research

TabMixer: Excavating Label Distribution Learning with Small-scale Features (10/25/2022)
Label distribution learning (LDL) differs from multi-label learning whic...

Discover and Mitigate Unknown Biases with Debiasing Alternate Networks (07/20/2022)
Deep image classifiers have been found to learn biases from datasets. To...

Distraction is All You Need for Fairness (03/15/2022)
With the recent growth in artificial intelligence models and its expandi...

Null It Out: Guarding Protected Attributes by Iterative Nullspace Projection (04/16/2020)
The ability to control for the kinds of information encoded in neural re...

Is Your Classifier Actually Biased? Measuring Fairness under Uncertainty with Bernstein Bounds (04/26/2020)
Most NLP datasets are not annotated with protected attributes such as ge...

Contrastive Learning for Fair Representations (09/22/2021)
Trained classification models can unintentionally lead to biased represe...

Model Explanation Disparities as a Fairness Diagnostic (03/03/2023)
In recent years, there has been a flurry of research focusing on the fai...