Oxford Handbook on AI Ethics Book Chapter on Race and Gender

08/08/2019
by Timnit Gebru et al.

From massive face-recognition-based surveillance and machine-learning-based decision systems predicting crime recidivism rates, to the move towards automated health diagnostic systems, artificial intelligence (AI) is being used in scenarios that have serious consequences in people's lives. However, this rapid permeation of AI into society has not been accompanied by a thorough investigation of the sociopolitical issues that cause certain groups of people to be harmed rather than advantaged by it. For instance, recent studies have shown that commercial face recognition systems have much higher error rates for dark-skinned women while having minimal errors on light-skinned men. A 2016 ProPublica investigation uncovered that machine-learning-based tools that assess crime recidivism rates in the US are biased against African Americans. Other studies show that natural language processing tools trained on newspapers exhibit societal biases (e.g. completing the analogy "Man is to computer programmer as woman is to X" with "homemaker"). At the same time, books such as Weapons of Math Destruction and Automating Inequality detail how people in lower socioeconomic classes in the US are subjected to more automated decision-making tools than those in the upper class. Thus, these tools are most often used on the people towards whom they exhibit the most bias. While many technical solutions have been proposed to alleviate bias in machine learning systems, a holistic and multifaceted approach is needed. This includes standardization bodies determining what types of systems can be used in which scenarios, ensuring that automated decision tools are created by people from diverse backgrounds, and understanding the historical and political factors that disadvantage certain groups who are subjected to these tools.
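The analogy test mentioned above relies on vector arithmetic over word embeddings: the system answers "a is to c as b is to ?" by finding the vocabulary word closest to v_c - v_a + v_b. A minimal sketch of that mechanism follows; the embeddings here are tiny synthetic vectors invented purely for illustration (real audits use embeddings trained on large corpora, such as word2vec), so the specific numbers and vocabulary are assumptions, not measured values.

```python
import math

# Synthetic toy embeddings, for illustration only. Real studies use
# vectors trained on large text corpora; these values are invented.
EMB = {
    "man":        (1.0, 0.1, 0.0),
    "woman":      (0.0, 1.0, 0.0),
    "programmer": (1.0, 0.2, 0.9),
    "homemaker":  (0.1, 1.1, 0.9),
    "doctor":     (0.6, 0.5, 1.0),
}

def cosine(u, v):
    """Cosine similarity between two vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def analogy(a, b, c, vocab=EMB):
    """Answer 'a is to c as b is to ?' by the nearest neighbour
    of v_c - v_a + v_b, excluding the query words themselves."""
    target = [vc - va + vb for va, vb, vc in zip(vocab[a], vocab[b], vocab[c])]
    candidates = [w for w in vocab if w not in (a, b, c)]
    return max(candidates, key=lambda w: cosine(vocab[w], target))

print(analogy("man", "woman", "programmer"))
```

With these toy vectors the nearest neighbour of v_programmer - v_man + v_woman is "homemaker", mirroring the biased completion the studies report: the bias is not in the arithmetic itself but in the geometry of embeddings learned from biased text.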

