Directional Bias Amplification

02/24/2021
by Angelina Wang, et al.

Mitigating bias in machine learning systems requires refining our understanding of bias propagation pathways: from societal structures to large-scale data to trained models to impact on society. In this work, we focus on one aspect of the problem, namely bias amplification: the tendency of models to amplify the biases present in the data they are trained on. A metric for measuring bias amplification was introduced in the seminal work by Zhao et al. (2017); however, as we demonstrate, this metric suffers from a number of shortcomings including conflating different types of bias amplification and failing to account for varying base rates of protected classes. We introduce and analyze a new, decoupled metric for measuring bias amplification, BiasAmp→ (Directional Bias Amplification). We thoroughly analyze and discuss both the technical assumptions and the normative implications of this metric. We provide suggestions about its measurement by cautioning against predicting sensitive attributes, encouraging the use of confidence intervals due to fluctuations in the fairness of models across runs, and discussing the limitations of what this metric captures. Throughout this paper, we work to provide an interrogative look at the technical measurement of bias amplification, guided by our normative ideas of what we want it to encompass.
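To make the directional idea concrete, below is a minimal sketch of the attribute-to-task (A→T) direction of the decoupled metric; the function and variable names are ours, and this is an illustration of the idea rather than the authors' reference implementation. For each attribute-task pair, it checks whether the pair is positively correlated in the ground-truth data, then measures how far the model's predicted conditional P(task | attribute) drifts from the ground-truth conditional, counting drift that exaggerates the existing correlation as amplification.

    import numpy as np

    def biasamp_a_to_t(attr, task, task_pred):
        """Sketch of the attribute->task direction of bias amplification.
        attr:      (n, |A|) binary ground-truth protected attributes
        task:      (n, |T|) binary ground-truth task labels
        task_pred: (n, |T|) binary model predictions for the tasks
        """
        n_a, n_t = attr.shape[1], task.shape[1]
        amp = 0.0
        for a in range(n_a):
            for t in range(n_t):
                p_a, p_t = attr[:, a].mean(), task[:, t].mean()
                p_at = (attr[:, a] * task[:, t]).mean()
                # y_at: does attribute a co-occur with task t more often than
                # chance in the data? (the direction of the base-rate bias)
                y_at = p_at > p_a * p_t
                mask = attr[:, a] == 1
                if not mask.any():
                    continue  # attribute never occurs; skip this pair
                # delta: how much the model shifts P(task t | attribute a)
                # relative to the ground truth
                delta = task_pred[mask, t].mean() - task[mask, t].mean()
                # count the shift as positive amplification when it exaggerates
                # the existing correlation, negative when it counteracts it
                amp += delta if y_at else -delta
        return amp / (n_a * n_t)

The task-to-attribute (T→A) direction would be computed symmetrically, conditioning on the task and comparing predicted attributes to ground-truth attributes. And since the paper observes that model fairness fluctuates across training runs, a value like this should be reported with a confidence interval over multiple random seeds rather than as a single point estimate.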

