Do the Machine Learning Models on a Crowd Sourced Platform Exhibit Bias? An Empirical Study on Model Fairness

05/21/2020
by Sumon Biswas, et al.

Machine learning models are increasingly being used in important decision-making software such as approving bank loans, recommending criminal sentencing, hiring employees, and so on. It is important to ensure the fairness of these models so that no discrimination is made between different groups in a protected attribute (e.g., race, sex, age) during decision making. Algorithms have been developed to measure unfairness and to mitigate it to a certain extent. In this paper, we have focused on the empirical evaluation of fairness and mitigation in real-world machine learning models. We have created a benchmark of 40 top-rated models from Kaggle used for 5 different tasks, and then evaluated their fairness using a comprehensive set of fairness metrics. Then, we have applied 7 mitigation techniques to these models and analyzed the fairness, the mitigation results, and the impacts on performance. We have found that some model optimization techniques induce unfairness in the models. On the other hand, although machine learning libraries offer some fairness control mechanisms, these are not documented. The mitigation algorithms also exhibit common patterns: mitigation in the post-processing stage is often costly (in terms of performance), and mitigation in the pre-processing stage is preferred in most cases. We have also presented different trade-off choices in fairness mitigation decisions. Our study suggests future research directions to reduce the gap between theoretical fairness-aware algorithms and the software engineering methods needed to leverage them in practice.
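The abstract does not name the specific fairness metrics used, but two common group-fairness measures for a binary protected attribute are statistical parity difference (SPD) and disparate impact (DI). A minimal, self-contained sketch of both (function names and toy data are illustrative, not taken from the paper):

```python
def group_rates(y_pred, protected):
    """Positive-prediction rate for each protected group (0 = unprivileged, 1 = privileged)."""
    rates = {}
    for g in (0, 1):
        preds = [p for p, a in zip(y_pred, protected) if a == g]
        rates[g] = sum(preds) / len(preds)
    return rates

def statistical_parity_difference(y_pred, protected):
    # SPD = P(y_hat = 1 | unprivileged) - P(y_hat = 1 | privileged); 0 is ideal
    r = group_rates(y_pred, protected)
    return r[0] - r[1]

def disparate_impact(y_pred, protected):
    # DI = P(y_hat = 1 | unprivileged) / P(y_hat = 1 | privileged); 1 is ideal
    r = group_rates(y_pred, protected)
    return r[0] / r[1]

# Toy example: both groups receive positive predictions at the same rate
y_pred    = [1, 0, 1, 1, 0, 1, 1, 1]
protected = [0, 0, 0, 0, 1, 1, 1, 1]
spd = statistical_parity_difference(y_pred, protected)  # 0.75 - 0.75 = 0.0
di = disparate_impact(y_pred, protected)                # 0.75 / 0.75 = 1.0
```

Values far from 0 (SPD) or 1 (DI) indicate that one group is favored; a common rule of thumb flags DI below 0.8.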

Related research

07/07/2022 - A Comprehensive Empirical Study of Bias Mitigation Methods for Software Fairness
  Software bias is an increasingly important operational concern for softw...

11/04/2021 - Modeling Techniques for Machine Learning Fairness: A Survey
  Machine learning models are becoming pervasive in high-stakes applicatio...

08/23/2019 - Fairness in Deep Learning: A Computational Perspective
  Deep learning is increasingly being used in high-stake decision making a...

12/21/2021 - A Pilot Study on Detecting Unfairness in Human Decisions With Machine Learning Algorithmic Bias Detection
  Fairness in decision-making has been a long-standing issue in our societ...

07/21/2023 - Towards Better Fairness-Utility Trade-off: A Comprehensive Measurement-Based Reinforcement Learning Framework
  Machine learning is widely used to make decisions with societal impact s...

09/03/2020 - FairXGBoost: Fairness-aware Classification in XGBoost
  Highly regulated domains such as finance have long favoured the use of m...

05/31/2022 - Inducing bias is simpler than you think
  Machine learning may be oblivious to human bias but it is not immune to ...
