Towards causal benchmarking of bias in face analysis algorithms

07/13/2020
by Guha Balakrishnan, et al.

Measuring algorithmic bias is crucial both to assess algorithmic fairness and to guide the improvement of algorithms. Current methods for measuring algorithmic bias in computer vision, which are based on observational datasets, are inadequate for this task because they conflate algorithmic bias with dataset bias. To address this problem, we develop an experimental method for measuring algorithmic bias of face analysis algorithms, which directly manipulates the attributes of interest, e.g., gender and skin tone, in order to reveal causal links between attribute variation and performance change. Our proposed method is based on generating synthetic “transects” of matched sample images that are designed to differ along specific attributes while leaving other attributes constant. A crucial aspect of our approach is relying on the perception of human observers, both to guide manipulations and to measure algorithmic bias. Besides allowing the measurement of algorithmic bias, synthetic transects have other advantages over observational datasets: they sample attributes more evenly, allowing for more straightforward bias analysis on minority and intersectional groups; they enable prediction of bias in new scenarios; they greatly reduce ethical and legal challenges; and they are economical and fast to obtain, helping make bias testing affordable and widely available. We validate our method by comparing it to a study that employs the traditional observational method for analyzing bias in gender classification algorithms. The two methods reach different conclusions. While the observational method reports gender and skin color biases, the experimental method reveals biases due to gender, hair length, age, and facial hair.
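The core of the method, as described above, is a simple experimental loop: hold all attributes of a synthetic face fixed, vary one attribute of interest in controlled steps, and record how the algorithm's error changes along that "transect". The sketch below illustrates this loop under assumed ingredients that the abstract does not specify: a face generator whose latent space contains a (hypothetical) linear direction for the probe attribute, and a downstream classifier to be audited. It omits the human-in-the-loop step that the paper relies on to validate the manipulations and to annotate the resulting images; the placeholder generator and classifier exist only so the sketch runs end to end.

import numpy as np


def make_transect(z, direction, steps):
    # Latent codes that differ only along `direction` (a unit vector): remove any
    # component of z along that direction, then re-add it at controlled magnitudes.
    direction = direction / np.linalg.norm(direction)
    z_base = z - (z @ direction) * direction
    return [z_base + alpha * direction for alpha in steps]


def transect_error_rates(generate, classify, true_label, z, direction, steps):
    # Error of the audited classifier at each point of one synthetic transect.
    # Because only the probe attribute varies across the transect, systematic
    # differences in error along it can be read causally (the paper's central idea).
    rates = []
    for z_i in make_transect(z, direction, steps):
        image = generate(z_i)                 # synthetic face for this latent code
        prediction = classify(image)          # e.g., predicted gender label
        rates.append(float(prediction != true_label))
    return rates


if __name__ == "__main__":
    # Dummy stand-ins so the sketch executes: the "generator" returns the latent
    # code itself and the "classifier" thresholds one coordinate. Real use would
    # plug in a face generator and the face analysis model under audit, and would
    # average over many transects rather than a single random seed.
    rng = np.random.default_rng(0)
    dim = 512
    generate = lambda z: z
    classify = lambda image: int(image[0] > 0.0)
    z = rng.standard_normal(dim)
    skin_tone_direction = np.eye(dim)[1]      # hypothetical attribute direction
    steps = np.linspace(-3.0, 3.0, 7)         # how far to push the attribute
    print(transect_error_rates(generate, classify, 1, z, skin_tone_direction, steps))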

