Affinity Clustering Framework for Data Debiasing Using Pairwise Distribution Discrepancy

06/02/2023
by   Siamak Ghodsi, et al.

Group imbalance, resulting from inadequate or unrepresentative data-collection methods, is a primary cause of representation bias in datasets. Representation bias can exist with respect to different groups of one or more protected attributes and can lead to prejudicial and discriminatory outcomes toward certain groups of individuals when a learning model is trained on such biased data. This paper presents MASC, a data augmentation approach that leverages affinity clustering to balance the representation of non-protected and protected groups of a target dataset: it borrows instances of the same protected attribute from similar datasets that fall into the same cluster as the target. The proposed method constructs an affinity matrix by quantifying distribution discrepancies between dataset pairs and transforming them into a symmetric pairwise similarity matrix. Non-parametric spectral clustering is then applied to this affinity matrix, automatically partitioning the datasets into an optimal number of clusters. We walk through a step-by-step experiment to demonstrate the proposed augmentation procedure and to evaluate and discuss its performance. A comparison with other data augmentation methods, both pre- and post-augmentation, is conducted, along with a model evaluation analysis of each method. Because our method can handle non-binary protected attributes, bias is measured in our experiments in a non-binary setup: the racial-group distributions of two separate minority groups are compared with the majority group before and after debiasing. Empirical results imply that augmenting biased datasets with real (genuine) data from similar contexts can effectively debias target datasets, performing comparably to existing data augmentation strategies.
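The abstract's pipeline (pairwise discrepancy → symmetric affinity → spectral clustering with an automatically chosen number of clusters) can be illustrated with a minimal sketch. The specific choices below — a 1-D Wasserstein distance as the discrepancy measure, an RBF kernel for the affinity, and the eigengap heuristic for picking the cluster count — are assumptions for illustration, not necessarily the measures used in the paper:

```python
import numpy as np
from scipy.stats import wasserstein_distance
from sklearn.cluster import SpectralClustering

def affinity_matrix(datasets):
    """Symmetric affinity matrix from pairwise distribution
    discrepancies between 1-D samples (one sample per dataset)."""
    n = len(datasets)
    D = np.zeros((n, n))
    for i in range(n):
        for j in range(i + 1, n):
            D[i, j] = D[j, i] = wasserstein_distance(datasets[i], datasets[j])
    # Convert discrepancies into similarities with an RBF kernel
    # (bandwidth set to the mean nonzero discrepancy).
    sigma = D[D > 0].mean() if (D > 0).any() else 1.0
    return np.exp(-(D ** 2) / (2 * sigma ** 2))

def eigengap_k(A):
    """Choose the number of clusters via the eigengap heuristic
    on the symmetric normalized graph Laplacian of A."""
    d = A.sum(axis=1)
    L = np.eye(len(A)) - A / np.sqrt(np.outer(d, d))
    vals = np.sort(np.linalg.eigvalsh(L))
    return int(np.argmax(np.diff(vals)) + 1)

rng = np.random.default_rng(0)
# Toy "datasets": two groups of three, drawn from two distinct distributions.
datasets = [rng.normal(0, 1, 300) for _ in range(3)] + \
           [rng.normal(5, 1, 300) for _ in range(3)]
A = affinity_matrix(datasets)
k = eigengap_k(A)
labels = SpectralClustering(n_clusters=k, affinity="precomputed",
                            random_state=0).fit_predict(A)
```

In the MASC setting, datasets assigned the same `labels` value as the target would then serve as donors of protected-group instances during augmentation.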
