Racial categories in machine learning

11/28/2018
by Sebastian Benthall, et al.

Controversies around race and machine learning have sparked debate among computer scientists over how to design machine learning systems that guarantee fairness. These debates rarely engage with how racial identity is embedded in our social experience, making for sociological and psychological complexity. This complexity challenges the paradigm of treating fairness as a formal property of supervised learning with respect to protected personal attributes. Racial identity is not simply a personal subjective quality. For people labeled "Black," it is an ascribed political category that has consequences for social differentiation, embedded in systemic patterns of social inequality achieved through both social and spatial segregation. In the United States, racial classification can best be understood as a system of inherently unequal status categories that places whites as the most privileged category while signifying the Negro/black category as stigmatized. Social stigma is reinforced through the unequal distribution of societal rewards and goods along racial lines, a distribution maintained by state, corporate, and civic institutions and practices. This creates a dilemma for society and designers: be blind to racial group disparities and thereby reify racialized social inequality by no longer measuring systemic inequality, or be conscious of racial categories in a way that itself reifies race. We propose a third option. By preceding group fairness interventions with unsupervised learning to dynamically detect patterns of segregation, machine learning systems can mitigate the root cause of social disparities: social segregation and stratification, without further anchoring status categories of disadvantage.
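The proposal in the final sentence can be illustrated with a minimal sketch. Here, clusters standing in for segregation-like structure are detected with unsupervised learning (k-means over synthetic feature vectors that might represent, say, neighborhood or network embeddings), and a group-fairness audit (a demographic parity gap) is then computed over the *inferred* groups rather than over ascribed racial labels. All data, names, and the choice of k-means are illustrative assumptions, not details from the paper:

```python
# Hypothetical sketch: detect group structure with unsupervised learning,
# then audit outcome rates across the inferred groups.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)

# Synthetic "segregated" population: two separated clusters of feature
# vectors standing in for spatial or social-network positions.
group_a = rng.normal(loc=0.0, scale=0.5, size=(200, 2))
group_b = rng.normal(loc=3.0, scale=0.5, size=(200, 2))
X = np.vstack([group_a, group_b])

# A biased binary outcome that tracks the latent segregation: favorable
# outcomes concentrate on one side of the feature space.
scores = (X[:, 0] > 1.5).astype(int)

# Step 1: unsupervised detection of segregation-like structure.
clusters = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)

# Step 2: group-fairness audit over the inferred groups, via the
# demographic parity gap (difference in favorable-outcome rates).
rate_0 = scores[clusters == 0].mean()
rate_1 = scores[clusters == 1].mean()
parity_gap = abs(rate_0 - rate_1)
print(f"demographic parity gap across inferred groups: {parity_gap:.2f}")
```

Because the synthetic outcome is aligned with the latent clusters, the audit surfaces a large disparity without ever consulting a protected attribute; a real system would replace the synthetic features with observed spatial or network data.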


