Estimating Structural Disparities for Face Models

04/13/2022
by   Shervin Ardeshir, et al.

In machine learning, disparity metrics are often defined by measuring the difference in a model's performance or outcomes across different sub-populations (groups) of datapoints. The inputs to disparity quantification therefore consist of a model's predictions ŷ, the ground-truth labels y, and group labels g for the datapoints. The model's performance for each group is calculated by comparing ŷ and y for the datapoints within that group, and from these per-group results the disparity of performance across groups can be computed. In many real-world scenarios, however, group labels (g) may not be available at scale during training and validation, or collecting them may be infeasible or undesirable since they often constitute sensitive information. As a result, evaluating disparity metrics across categorical groups is not possible. On the other hand, in many scenarios noisy groupings may be obtainable via some form of proxy, which would still allow measuring disparity metrics across sub-populations. Here we explore performing such analysis on computer vision models trained on human faces, on tasks such as face attribute prediction and affect estimation. Our experiments indicate that embeddings from an off-the-shelf face recognition model can meaningfully serve as a proxy for such estimation.
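To make the proxy-group idea concrete, the following is a minimal sketch (not the paper's actual method) of estimating a performance gap when true group labels g are unavailable: face embeddings are clustered into proxy groups, and the disparity is taken as the accuracy gap between the best- and worst-performing clusters. All names (`proxy_group_disparity`, `n_proxy_groups`) and the choice of k-means are illustrative assumptions.

```python
# Sketch: disparity estimation over proxy groups derived from embeddings.
# Assumes numpy and scikit-learn are available; k-means clustering stands
# in for whatever proxy grouping the embeddings support.
import numpy as np
from sklearn.cluster import KMeans

def proxy_group_disparity(embeddings, y_true, y_pred, n_proxy_groups=4, seed=0):
    """Cluster embeddings into proxy groups, then return the accuracy gap
    between the best- and worst-performing clusters, plus per-group accuracies."""
    groups = KMeans(n_clusters=n_proxy_groups, random_state=seed,
                    n_init=10).fit_predict(embeddings)
    correct = (y_true == y_pred).astype(float)
    per_group = np.array([correct[groups == g].mean()
                          for g in range(n_proxy_groups)])
    return per_group.max() - per_group.min(), per_group

# Toy usage with random embeddings and labels (no real face data).
rng = np.random.default_rng(0)
emb = rng.normal(size=(200, 16))       # stand-in for face-recognition embeddings
y = rng.integers(0, 2, size=200)       # ground-truth attribute labels
yhat = rng.integers(0, 2, size=200)    # model predictions
gap, per_group = proxy_group_disparity(emb, y, yhat)
```

With true group labels g, the same computation would simply replace the cluster assignments with g; the proxy version trades label availability for some noise in the grouping.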


Related research:

- 06/08/2023 — Shedding light on underrepresentation and Sampling Bias in machine learning. Accurately measuring discrimination is crucial to faithfully assessing f...
- 06/24/2022 — "You Can't Fix What You Can't Measure": Privately Measuring Demographic Performance Disparities in Federated Learning. Federated learning allows many devices to collaborate in the training of...
- 06/20/2018 — Fairness Without Demographics in Repeated Loss Minimization. Machine learning models (e.g., speech recognizers) are usually trained t...
- 05/02/2019 — Long term impact of fair machine learning in sequential decision making: representation disparity and group retention. Machine learning models trained on data from multiple demographic groups...
- 03/04/2022 — OPAL: Occlusion Pattern Aware Loss for Unsupervised Light Field Disparity Estimation. Light field disparity estimation is an essential task in computer vision...
- 06/01/2021 — Fair-Net: A Network Architecture For Reducing Performance Disparity Between Identifiable Sub-Populations. In real world datasets, particular groups are under-represented, much ra...
- 02/16/2023 — Towards Reliable Assessments of Demographic Disparities in Multi-Label Image Classifiers. Disaggregated performance metrics across demographic groups are a hallma...
