Social Norm Bias: Residual Harms of Fairness-Aware Algorithms

08/25/2021 · by Myra Cheng, et al.

Many modern learning algorithms mitigate bias by enforcing fairness across coarsely defined groups related to a sensitive attribute like gender or race. However, the same algorithms seldom account for the within-group biases that arise due to the heterogeneity of group members. In this work, we characterize Social Norm Bias (SNoB), a subtle but consequential type of discrimination that may be exhibited by automated decision-making systems, even when these systems achieve group fairness objectives. We study this issue through the lens of gender bias in occupation classification from biographies. We quantify SNoB by measuring how strongly an algorithm's predictions are associated with conformity to gender norms, where conformity itself is estimated with a machine learning approach. This framework reveals that for classification tasks related to male-dominated occupations, fairness-aware classifiers favor biographies written in ways that align with masculine gender norms. We compare SNoB across fairness intervention techniques and show that post-processing interventions do not mitigate this type of bias at all.
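To make the quantification step concrete, below is a minimal sketch of one way to measure this kind of association, not the paper's actual implementation: an auxiliary text classifier is trained to predict gender from biographies as a proxy for gender-norm conformity, and the rank correlation between the occupation classifier's scores and that proxy is reported. The TF-IDF/logistic-regression proxy, the Spearman correlation, and the names `occ_scores` and `gender_norm_scores` are illustrative assumptions; the fairness-aware occupation classifier is assumed to be trained elsewhere.

```python
# Hypothetical sketch: quantify SNoB as the association between an
# occupation classifier's scores and a learned gender-norm conformity
# score. Assumes biographies and binary gender labels are available.

from scipy.stats import spearmanr
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression


def gender_norm_scores(train_bios, train_genders, eval_bios):
    """Proxy for gender-norm conformity: an auxiliary classifier's
    predicted probability that a biography is written about a man
    (assumes label 1 encodes the masculine class)."""
    vec = TfidfVectorizer(min_df=5)
    clf = LogisticRegression(max_iter=1000)
    clf.fit(vec.fit_transform(train_bios), train_genders)
    return clf.predict_proba(vec.transform(eval_bios))[:, 1]


def snob_association(occ_scores, norm_scores):
    """Rank correlation between the occupation classifier's scores
    and gender-norm conformity. For a male-dominated occupation, a
    positive value indicates the classifier favors biographies that
    conform to masculine gender norms."""
    rho, _ = spearmanr(occ_scores, norm_scores)
    return rho
```

Under this sketch, comparing the correlation across fairness interventions (e.g., pre-, in-, and post-processing) would correspond to the paper's comparison of SNoB across intervention techniques.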
