Is Fairness Only Metric Deep? Evaluating and Addressing Subgroup Gaps in Deep Metric Learning

03/23/2022
by   Natalie Dullerud, et al.

Deep metric learning (DML) enables learning with less supervision through its emphasis on the similarity structure of representations. Much work has gone into improving the generalization of DML in settings such as zero-shot retrieval, but little is known about its implications for fairness. In this paper, we are the first to evaluate state-of-the-art DML methods trained on imbalanced data and to show the negative impact these representations have on minority-subgroup performance when used for downstream tasks. We first define fairness in DML through an analysis of three properties of the representation space (inter-class alignment, intra-class alignment, and uniformity) and propose finDML, the fairness in non-balanced DML benchmark, to characterize representation fairness. Using finDML, we find that bias in DML representations propagates to common downstream classification tasks. Surprisingly, this bias persists even when the training data for the downstream task is re-balanced. To address this problem, we present Partial Attribute De-correlation (PARADE), which de-correlates feature representations from sensitive attributes and reduces performance gaps between subgroups in both the embedding space and downstream metrics.
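The paper's precise definitions of the three representation-space properties appear in the full text; below is a minimal sketch of how such quantities are commonly measured on L2-normalized embeddings. The function names and the Gaussian-potential form of uniformity (with temperature `t`) are illustrative assumptions, not the paper's exact formulations.

```python
import numpy as np

def intra_class_alignment(emb, labels, cls):
    """Mean squared distance between distinct pairs within one class.

    Lower values mean the class is more tightly clustered.
    NOTE: illustrative definition; the paper's exact metric may differ.
    """
    e = emb[labels == cls]
    sq = ((e[:, None, :] - e[None, :, :]) ** 2).sum(-1)
    n = len(e)
    return sq.sum() / (n * (n - 1))  # diagonal (self-pairs) is zero, so exclude it from the count

def inter_class_alignment(emb, labels, c1, c2):
    """Mean squared distance between all cross-class pairs (c1 vs. c2)."""
    a, b = emb[labels == c1], emb[labels == c2]
    return ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1).mean()

def uniformity(emb, t=2.0):
    """Log of the mean Gaussian potential over all distinct pairs.

    More negative values indicate embeddings spread more uniformly
    over the hypersphere. Assumes emb rows are L2-normalized.
    """
    sq = ((emb[:, None, :] - emb[None, :, :]) ** 2).sum(-1)
    off = sq[~np.eye(len(emb), dtype=bool)]  # drop self-pairs
    return float(np.log(np.exp(-t * off).mean()))
```

Computing these per subgroup (e.g., per sensitive-attribute value) and comparing the resulting gaps is one way to characterize the kind of representation unfairness the benchmark targets.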
