Demographic-Reliant Algorithmic Fairness: Characterizing the Risks of Demographic Data Collection in the Pursuit of Fairness

04/18/2022
by McKane Andrus, et al.

Most proposed algorithmic fairness techniques require access to data on a "sensitive attribute" or "protected category" (such as race, ethnicity, gender, or sexuality) in order to make performance comparisons and standardizations across groups; however, such data is largely unavailable in practice, hindering the widespread adoption of algorithmic fairness. In this paper, we consider calls to collect more demographic data to enable algorithmic fairness, and we challenge the notion that discrimination can be overcome with smart enough technical methods and sufficient data alone. We show how these techniques largely ignore broader questions of data governance and systemic oppression when categorizing individuals for the purpose of fairer algorithmic processing. We explore under what conditions demographic data should be collected and used to enable algorithmic fairness methods by characterizing a range of social risks to individuals and communities. For risks to individuals, we consider the unique privacy risks associated with sharing the sensitive attributes most likely to be the target of fairness analysis, the possible harms stemming from miscategorizing and misrepresenting individuals in the data collection process, and the use of sensitive data beyond data subjects' expectations. Looking more broadly, the risks to entire groups and communities include the expansion of surveillance infrastructure in the name of fairness, the misrepresentation and mischaracterization of what it means to belong to a demographic group or to hold a certain identity, and the loss of communities' ability to define for themselves what constitutes biased or unfair treatment. We argue that, by confronting these questions before and during the collection of demographic data, algorithmic fairness methods are more likely to actually mitigate harmful treatment disparities without reinforcing systems of oppression.
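To make concrete why these techniques depend on demographic data, here is a minimal Python sketch (not from the paper; the function, variable names, and toy data are all illustrative assumptions) of a standard group-wise audit: a metric like the demographic parity gap can only be computed when each record carries a sensitive-attribute label, which is exactly the data the paper argues is often unavailable or risky to collect.

```python
# Minimal sketch of a group-wise fairness audit (illustrative only, not
# from the paper). Without the sensitive-attribute column, no version of
# this comparison across groups is possible.
import numpy as np

def demographic_parity_gap(y_pred: np.ndarray, sensitive: np.ndarray) -> float:
    """Largest difference in positive-prediction rate between any two groups."""
    rates = [y_pred[sensitive == g].mean() for g in np.unique(sensitive)]
    return max(rates) - min(rates)

# Hypothetical audit data: binary model decisions plus a group label.
y_pred = np.array([1, 0, 1, 1, 0, 0, 1, 0])
group = np.array(["a", "a", "a", "b", "b", "b", "b", "b"])

# ~0.27 for this toy data; 0.0 would indicate demographic parity.
print(demographic_parity_gap(y_pred, group))
```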

Related research

"What We Can't Measure, We Can't Understand": Challenges to Demographic Data Procurement in the Pursuit of Fairness (10/30/2020)
As calls for fair and unbiased algorithmic systems increase, so too does...

Causal Multi-Level Fairness (10/14/2020)
Algorithmic systems are known to impact marginalized groups severely, an...

An example of prediction which complies with Demographic Parity and equalizes group-wise risks in the context of regression (11/13/2020)
Let (X, S, Y) ∈ ℝ^p × {1, 2} × ℝ be a triplet following some joint distribut...

Adaptive Sampling to Reduce Disparate Performance (06/11/2020)
Existing methods for reducing disparate performance of a classifier acro...

Fairness for Unobserved Characteristics: Insights from Technological Impacts on Queer Communities (02/03/2021)
Advances in algorithmic fairness have largely omitted sexual orientation...

Demographics and discussion influence views on algorithmic fairness (12/25/2017)
The field of algorithmic fairness has highlighted ethical questions whic...

Ex-Ante Assessment of Discrimination in Dataset (08/16/2022)
Data owners face increasing liability for how the use of their data coul...
