Auditing for Diversity using Representative Examples

07/15/2021
by Vijay Keswani, et al.

Assessing the diversity of a dataset of information associated with people is crucial before using such data for downstream applications. For a given dataset, this often involves computing the imbalance or disparity in the empirical marginal distribution of a protected attribute (e.g., gender or dialect). However, real-world datasets, such as images from Google Search or collections of Twitter posts, often do not have protected attributes labeled. Consequently, to derive disparity measures for such datasets, the elements need to be hand-labeled or crowd-annotated, both of which are expensive processes. We propose a cost-effective approach to approximate the disparity of a given unlabeled dataset, with respect to a protected attribute, using a control set of labeled representative examples. Our proposed algorithm uses the pairwise similarity between elements in the dataset and elements in the control set to effectively bootstrap an approximation to the disparity of the dataset. Importantly, we show that using a control set whose size is much smaller than the size of the dataset is sufficient to achieve a small approximation error. Further, based on our theoretical framework, we also provide an algorithm to construct adaptive control sets that achieve smaller approximation errors than randomly chosen control sets. Simulations on two image datasets and one Twitter dataset demonstrate the efficacy of our approach (using random and adaptive control sets) in auditing the diversity of a wide variety of datasets.
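To make the similarity-based idea concrete, the sketch below estimates disparity for a binary protected attribute by soft-assigning each unlabeled element to a group using similarity-weighted control-set labels, then measuring the imbalance of the inferred marginal. This is a minimal illustration, not the paper's exact estimator: the cosine-similarity choice, the soft-assignment rule, the absolute-imbalance disparity measure, and all names (e.g. estimate_disparity) are assumptions made for this example.

# Illustrative sketch (hypothetical names; not the authors' algorithm):
# approximate the disparity of an unlabeled dataset w.r.t. a binary
# protected attribute using a small labeled control set.
import numpy as np

def estimate_disparity(dataset_features, control_features, control_labels):
    """dataset_features: (n, d) unlabeled elements; control_features: (m, d)
    labeled control set with m << n; control_labels: (m,) binary labels in {0, 1}."""
    # Cosine similarity between every dataset element and every control element.
    X = dataset_features / np.linalg.norm(dataset_features, axis=1, keepdims=True)
    C = control_features / np.linalg.norm(control_features, axis=1, keepdims=True)
    sim = X @ C.T                                       # shape (n, m)

    # Keep nonnegative similarities and normalize them per dataset element.
    weights = np.clip(sim, 0.0, None)
    weights /= weights.sum(axis=1, keepdims=True) + 1e-12

    # Soft group assignment: estimated probability that each element is in group 1.
    p_group1 = weights @ control_labels                 # shape (n,)

    # Inferred marginal of the protected attribute, and its imbalance.
    frac_group1 = p_group1.mean()
    return abs(frac_group1 - (1.0 - frac_group1))

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Toy data: two Gaussian clusters standing in for two protected groups.
    n0, n1, d = 700, 300, 16
    data = np.vstack([rng.normal(-1, 1, (n0, d)), rng.normal(1, 1, (n1, d))])
    # Small labeled control set (m = 20 << n = 1000).
    ctrl = np.vstack([rng.normal(-1, 1, (10, d)), rng.normal(1, 1, (10, d))])
    ctrl_labels = np.array([0] * 10 + [1] * 10, dtype=float)
    print("estimated disparity:", estimate_disparity(data, ctrl, ctrl_labels))
    print("true disparity:", abs(n1 - n0) / (n0 + n1))

In this toy setup the estimate should land close to the true imbalance of 0.4; the paper's adaptive control sets aim to shrink this approximation error further than a randomly chosen control set.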

Related research

04/06/2022: Marrying Fairness and Explainability in Supervised Learning
Machine learning algorithms that aid human decision-making may inadverte...

06/01/2019: Assessing Algorithmic Fairness with Unobserved Protected Class Using Data Combination
The increasing impact of algorithmic decisions on people's lives compels...

06/02/2023: Affinity Clustering Framework for Data Debiasing Using Pairwise Distribution Discrepancy
Group imbalance, resulting from inadequate or unrepresentative data coll...

07/02/2018: Debiasing representations by removing unwanted variation due to protected attributes
We propose a regression-based approach to removing implicit biases in re...

01/29/2019: Implicit Diversity in Image Summarization
Case studies, such as Kay et al., 2015 have shown that in image summariz...

03/25/2018: Diversity and Interdisciplinarity: How Can One Distinguish and Recombine Disparity, Variety, and Balance?
The dilemma which remained unsolved using Rao-Stirling diversity, namely...

03/05/2019: Learning a Lattice Planner Control Set for Autonomous Vehicles
In this paper, we introduce a method to compute a sparse lattice planner...
