A Pilot Study on Detecting Unfairness in Human Decisions With Machine Learning Algorithmic Bias Detection

12/21/2021
by Zhe Yu, et al.

Fairness in decision-making has been a long-standing issue in our society. Despite the increasing number of research activities on unfairness mitigation in machine learning models, there is little research focused on mitigating unfairness in human decisions. Fairness in human decisions is as important as, if not more important than, fairness in machine learning models, since there are processes where humans make the final decisions, and machine learning models can inherit bias from the human decisions they were trained on. This work therefore aims to detect unfairness in human decisions, the first step toward solving the unfair-human-decision problem. This paper proposes to utilize existing machine learning fairness detection mechanisms to detect unfairness in human decisions. The rationale is that, while it is difficult to directly test whether a human makes unfair decisions, current research on machine learning fairness makes it easy to test, at large scale and low cost, whether a machine learning model is unfair. By synthesizing unfair labels on four general machine learning fairness datasets and one image processing dataset, this paper shows that the proposed approach is able to detect (1) whether or not unfair labels exist in the training data and (2) the degree and direction of the unfairness. We believe that this work demonstrates the potential of utilizing machine learning fairness to detect human decision fairness. Following this work, research can be conducted on (1) preventing future unfair decisions, (2) fixing prior unfair decisions, and (3) training a fairer machine learning model.
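The core idea, as described above, can be sketched in a few lines. The following is a minimal illustration, not the paper's actual method: it synthesizes "unfair human decisions" by flipping favorable labels for a protected group, trains a surrogate model on those labels, and then applies a standard group-fairness metric (statistical parity difference) to the model's predictions to detect the presence and direction of the injected bias. All variable names, the flip rate, and the choice of metric are assumptions for illustration.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000

# Synthetic data: one binary protected attribute and two neutral features.
protected = rng.integers(0, 2, n)
x = rng.normal(size=(n, 2))

# Ground-truth "fair" labels depend only on the neutral features.
fair_y = (x[:, 0] + x[:, 1] + rng.normal(scale=0.5, size=n) > 0).astype(int)

# Synthesize unfair human decisions: flip half of the favorable labels
# to unfavorable for the protected group (protected == 1).
unfair_y = fair_y.copy()
flip = (protected == 1) & (fair_y == 1) & (rng.random(n) < 0.5)
unfair_y[flip] = 0

# Train a surrogate model on the possibly biased decisions,
# with the protected attribute included as a feature.
features = np.column_stack([protected, x])
model = LogisticRegression().fit(features, unfair_y)
pred = model.predict(features)

# Statistical parity difference:
# P(pred = 1 | unprotected) - P(pred = 1 | protected).
# A value near zero suggests no group-level bias in the learned decisions;
# a large positive value indicates bias against the protected group,
# a large negative value bias in its favor.
spd = pred[protected == 0].mean() - pred[protected == 1].mean()
print(f"statistical parity difference: {spd:.2f}")
```

Because the model inherits the injected bias from its training labels, the sign of the metric reveals the direction of the unfairness and its magnitude reflects the degree, which is the detection behavior the abstract describes.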

