Fairness Indicators for Systematic Assessments of Visual Feature Extractors

02/15/2022
by   Priya Goyal, et al.

Does everyone equally benefit from computer vision systems? Answers to this question become more and more important as computer vision systems are deployed at large scale, and can spark major concerns when they exhibit vast performance discrepancies between people from various demographic and social backgrounds. Systematic diagnosis of the fairness, harms, and biases of computer vision systems is an important step towards building socially responsible systems. To initiate an effort towards standardized fairness audits, we propose three fairness indicators, which aim at quantifying the harms and biases of visual systems. Our indicators use existing publicly available datasets collected for fairness evaluations, and focus on three main types of harms and bias identified in the literature, namely harmful label associations, disparity in learned representations of social and demographic traits, and biased performance on geographically diverse images from across the world. We define precise experimental protocols applicable to a wide range of computer vision models. These indicators are part of an ever-evolving suite of fairness probes and are not intended to be a substitute for a thorough analysis of the broader impact of new computer vision technologies. Yet, we believe they are a necessary first step towards (1) facilitating the widespread adoption and mandating of fairness assessments in computer vision research, and (2) tracking progress towards building socially responsible models. To study the practical effectiveness and broad applicability of our proposed indicators to any visual system, we apply them to off-the-shelf models built using widely adopted model training paradigms, which vary in whether they can predict labels on a given image or only produce embeddings. We also systematically study the effect of data domain and model size.
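One of the harm types above, biased performance on geographically diverse images, is commonly probed by disaggregating an accuracy metric over groups and inspecting the gap between the best- and worst-served group. The sketch below is purely illustrative and is not the paper's exact protocol; the function name, group labels, and toy data are assumptions for the example.

```python
from collections import defaultdict

def disaggregated_accuracy(predictions, labels, groups):
    """Compute per-group accuracy and the largest gap between groups.

    A large gap signals a performance disparity, e.g. between images
    collected in different regions of the world.
    """
    correct = defaultdict(int)
    total = defaultdict(int)
    for pred, label, group in zip(predictions, labels, groups):
        total[group] += 1
        if pred == label:
            correct[group] += 1
    acc = {g: correct[g] / total[g] for g in total}
    gap = max(acc.values()) - min(acc.values())
    return acc, gap

# Toy example: the model is right on 3/4 "region_a" images
# but only 1/2 "region_b" images, giving a 0.25 accuracy gap.
preds  = ["cat", "dog", "cat", "cat", "dog", "cat"]
labels = ["cat", "dog", "cat", "dog", "dog", "dog"]
groups = ["region_a"] * 4 + ["region_b"] * 2
acc, gap = disaggregated_accuracy(preds, labels, groups)
```

In practice, such a metric is reported alongside per-group sample sizes, since small groups make the estimated gap noisy.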


Related research

05/21/2020 · Gender Slopes: Counterfactual Fairness for Computer Vision Models by Attribute Manipulation
Automated computer vision systems have been applied in many domains incl...

08/11/2023 · DIG In: Evaluating Disparities in Image Generations with Indicators for Geographic Diversity
The unprecedented photorealistic results achieved by recent text-to-imag...

08/31/2023 · FACET: Fairness in Computer Vision Evaluation Benchmark
Computer vision models have known performance disparities across attribu...

05/09/2020 · Cyberbullying Detection with Fairness Constraints
Cyberbullying is a widespread adverse phenomenon among online social int...

02/16/2023 · Towards Reliable Assessments of Demographic Disparities in Multi-Label Image Classifiers
Disaggregated performance metrics across demographic groups are a hallma...

08/05/2021 · Evaluating CLIP: Towards Characterization of Broader Capabilities and Downstream Implications
Recently, there have been breakthroughs in computer vision ("CV") models...

03/25/2023 · Fairness meets Cross-Domain Learning: a new perspective on Models and Metrics
Deep learning-based recognition systems are deployed at scale for severa...
