Fairness in representation: quantifying stereotyping as a representational harm

01/28/2019
by Mohsen Abbasi, et al.

While harms of allocation have been increasingly studied within the subfield of algorithmic fairness, harms of representation have received considerably less attention. In this paper, we formalize two notions of stereotyping and show how they manifest downstream as allocative harms in the machine learning pipeline. We also propose mitigation strategies and demonstrate their effectiveness on synthetic datasets.
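To illustrate how a representational harm can cascade into an allocative one, here is a minimal synthetic sketch. This is our own toy construction, not the paper's formalization: "stereotyping" is modeled crudely as collapsing every member of one group onto that group's average representation, and the downstream "allocation" is a simple top-k selection.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic population: two groups, each individual has a true skill score.
n = 1000
group = rng.integers(0, 2, n)            # group membership (0 or 1)
skill = rng.normal(50 + 5 * group, 10)   # true qualification, group 1 slightly higher on average

# Representational harm: group 0 is "stereotyped" -- every member is
# represented by the group average, erasing within-group variation.
stereotyped = skill.copy()
stereotyped[group == 0] = skill[group == 0].mean()

# Downstream allocation: select the top 20% by the (stereotyped) representation.
selected = stereotyped >= np.quantile(stereotyped, 0.8)

# Allocative harm: group-0 members whose true skill is in the top 20%
# can never be selected, because their individual signal was erased.
qualified = skill >= np.quantile(skill, 0.8)
missed = np.mean(qualified[group == 0] & ~selected[group == 0])
print(f"fraction of group 0 that is qualified but unselected: {missed:.2f}")
```

Because the stereotyped representation pins all of group 0 below the selection threshold, highly qualified members of that group are systematically excluded even though no explicit allocation rule discriminates: the harm enters entirely through the representation.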


Related research

05/11/2023 | A Survey on Intersectional Fairness in Machine Learning: Notions, Mitigation, and Challenges
The widespread adoption of Machine Learning systems, especially in more ...

04/27/2023 | Proportionally Representative Clustering
In recent years, there has been a surge in effort to formalize notions o...

02/13/2019 | Mathematical Notions vs. Human Perception of Fairness: A Descriptive Approach to Fairness for Machine Learning
Fairness for Machine Learning has received considerable attention, recen...

05/10/2021 | Who Gets What, According to Whom? An Analysis of Fairness Perceptions in Service Allocation
Algorithmic fairness research has traditionally been linked to the disci...

05/15/2023 | Algorithmic Censoring in Dynamic Learning Systems
Dynamic learning systems subject to selective labeling exhibit censoring...

09/12/2022 | Fairness in Forecasting of Observations of Linear Dynamical Systems
In machine learning, training data often capture the behaviour of multip...

07/11/2023 | Towards A Scalable Solution for Improving Multi-Group Fairness in Compositional Classification
Despite the rich literature on machine learning fairness, relatively lit...
