DualFair: Fair Representation Learning at Both Group and Individual Levels via Contrastive Self-supervision

03/15/2023
by Sungwon Han et al.

Algorithmic fairness has become an important machine learning problem, especially for mission-critical Web applications. This work presents a self-supervised model, called DualFair, that can debias sensitive attributes like gender and race from learned representations. Unlike existing models that target a single type of fairness, our model jointly optimizes for two fairness criteria, group fairness and counterfactual fairness, and hence makes fairer predictions at both the group and individual levels. Our model uses a contrastive loss to generate embeddings that are indistinguishable across protected groups, while forcing the embeddings of counterfactual pairs to be similar. It then uses a self-knowledge distillation method to maintain the quality of the representations for downstream tasks. Extensive analysis over multiple datasets confirms the model's validity and further shows the synergy of jointly addressing the two fairness criteria, suggesting the model's potential value in fair intelligent Web applications.
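The abstract's core idea, pulling each sample's embedding toward the embedding of its counterfactual twin (the same individual with the sensitive attribute flipped) while treating other samples as negatives, can be sketched with a standard contrastive objective. This is a simplified illustration of that idea, not the authors' exact DualFair loss; the function names and the temperature value are assumptions for the sketch.

```python
import numpy as np

def cosine_sim(a, b):
    # Cosine similarity between two embedding vectors.
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def counterfactual_contrastive_loss(z, z_cf, temperature=0.1):
    """Contrastive-style loss (hypothetical sketch, not the paper's exact
    objective): each embedding z[i] is pulled toward z_cf[i], the embedding
    of its counterfactual pair, and pushed away from the other samples in
    the batch, which serve as negatives."""
    n = len(z)
    loss = 0.0
    for i in range(n):
        # Similarity to the positive (counterfactual) pair.
        pos = np.exp(cosine_sim(z[i], z_cf[i]) / temperature)
        # Similarities to all other batch samples act as negatives.
        neg = sum(np.exp(cosine_sim(z[i], z[j]) / temperature)
                  for j in range(n) if j != i)
        loss += -np.log(pos / (pos + neg))
    return loss / n
```

When the counterfactual embedding already matches the original (the debiased ideal), the positive term dominates and the loss is small; embeddings that shift under a flipped sensitive attribute incur a larger penalty, which is the individual-level fairness pressure the abstract describes.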


Related research

- Learning Fair Node Representations with Graph Counterfactual Fairness (01/10/2022)
- On the Fairness of Causal Algorithmic Recourse (10/13/2020)
- Adversarial Learning for Counterfactual Fairness (08/30/2020)
- Contrastive Mixture of Posteriors for Counterfactual Inference, Data Integration and Fairness (06/15/2021)
- iFair: Learning Individually Fair Data Representations for Algorithmic Decision Making (06/04/2018)
- Beyond Individual and Group Fairness (08/21/2020)
- Walk a Mile in Their Shoes: a New Fairness Criterion for Machine Learning (10/13/2022)
