Developing a novel fair-loan-predictor through a multi-sensitive debiasing pipeline: DualFair

10/17/2021
by Jashandeep Singh, et al.

Machine learning (ML) models are increasingly used for high-stakes applications that can greatly impact people's lives. Despite their widespread use, these models can be biased with respect to certain social groups on the basis of race, gender, or ethnicity. Many prior works have attempted to mitigate this "model discrimination" by updating the training data (pre-processing), altering the model learning process (in-processing), or manipulating the model's output (post-processing). However, these works have not yet been extended to the realm of multi-sensitive parameters and sensitive options (MSPSO), where sensitive parameters are attributes that can be discriminated against (e.g., race) and sensitive options are options within sensitive parameters (e.g., Black or White), which limits their real-world usability. Prior work in fairness has also suffered from an accuracy-fairness tradeoff that prevents both from being high at the same time. Moreover, previous literature has failed to provide holistic fairness metrics that work with MSPSO. In this paper, we solve all three of these problems by (a) creating a novel bias mitigation technique called DualFair and (b) developing a new fairness metric (AWI) that can handle MSPSO. Lastly, we test our novel mitigation method using a comprehensive U.S. mortgage lending dataset and show that our classifier, a fair loan predictor, obtains better fairness and accuracy metrics than current state-of-the-art models.
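To make the MSPSO setting concrete, here is a minimal sketch, not the DualFair pipeline or the paper's AWI metric, of how a simple group-fairness check could be run over multiple sensitive parameters (e.g., race, gender) and their sensitive options. The function name, toy data, and use of pandas are assumptions for illustration only.

```python
import numpy as np
import pandas as pd

def subgroup_rate_gaps(y_pred, sensitive):
    """For each sensitive parameter (column), compute the positive-prediction
    rate of every sensitive option (value) and report the largest gap
    between any two options of that parameter."""
    gaps = {}
    for param in sensitive.columns:
        rates = {
            option: y_pred[(sensitive[param] == option).to_numpy()].mean()
            for option in sensitive[param].unique()
        }
        gaps[param] = max(rates.values()) - min(rates.values())
    return gaps

# Toy example: loan approvals (1) and denials (0) for applicants described
# by two sensitive parameters, each with several sensitive options.
y_pred = np.array([1, 0, 1, 1, 0, 1, 0, 0])
sensitive = pd.DataFrame({
    "race":   ["black", "white", "white", "asian", "black", "white", "asian", "black"],
    "gender": ["f", "m", "f", "m", "f", "m", "f", "m"],
})
print(subgroup_rate_gaps(y_pred, sensitive))  # one rate gap per sensitive parameter
```

In a real audit, y_pred and the sensitive columns would come from the trained loan classifier and the mortgage lending dataset; the paper's AWI metric is intended to handle all of these parameter-option subgroups at once rather than one gap per parameter.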

Related research

07/14/2022
Bias Mitigation for Machine Learning Classifiers: A Comprehensive Survey
This paper provides a comprehensive survey of bias mitigation methods fo...

11/04/2019
Auditing and Achieving Intersectional Fairness in Classification Problems
Machine learning algorithms are extensively used to make increasingly mo...

06/07/2023
M^3Fair: Mitigating Bias in Healthcare Data through Multi-Level and Multi-Sensitive-Attribute Reweighting Method
In the data-driven artificial intelligence paradigm, models heavily rely...

02/26/2020
DeBayes: a Bayesian method for debiasing network embeddings
As machine learning algorithms are increasingly deployed for high-impact...

05/31/2018
Multiaccuracy: Black-Box Post-Processing for Fairness in Classification
Machine learning predictors are successfully deployed in applications ra...

07/06/2020
Making Fair ML Software using Trustworthy Explanation
Machine learning software is being used in many applications (finance, h...

10/16/2017
Fair Kernel Learning
New social and economic activities massively exploit big data and machin...
