Fairness-Aware Learning with Prejudice Free Representations

02/26/2020
by Ramanujam Madhavan, et al.

Machine learning models are being used extensively to make decisions that have a significant impact on human life. These models are trained on historical data that may contain information about sensitive attributes such as race, sex, and religion. The presence of such sensitive attributes can affect certain population subgroups unfairly. It is straightforward to remove sensitive features from the data; however, a model can still pick up prejudice from latent sensitive attributes that remain in the training data. This has led to growing apprehension about the fairness of the models employed. In this paper, we propose a novel algorithm that can effectively identify and treat latent discriminating features. The approach is agnostic of the learning algorithm and generalizes well to classification as well as regression tasks. It can also serve as a key aid in demonstrating that a model is free of discrimination for regulatory compliance, should the need arise. The approach helps select discrimination-free features that improve model performance while ensuring the fairness of the model. Experimental results from our evaluations on publicly available real-world datasets show near-ideal fairness measurements in comparison to other methods.
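The abstract does not spell out the authors' algorithm, but the core idea of a "latent discriminating feature" can be illustrated with a minimal sketch: even after a sensitive attribute is dropped, remaining features that correlate strongly with it act as proxies. The correlation-based check below is a hypothetical illustration of that leakage, not the paper's method; all names and the toy data are invented for the example.

```python
# Hedged sketch: detect proxy features by measuring how strongly each
# remaining feature correlates with the dropped sensitive attribute.
# This is NOT the paper's algorithm, only an illustration of latent leakage.

def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy) if sx and sy else 0.0

def flag_proxy_features(features, sensitive, threshold=0.5):
    """Return names of features whose absolute correlation with the
    sensitive attribute exceeds the threshold."""
    return [name for name, vals in features.items()
            if abs(pearson(vals, sensitive)) > threshold]

# Toy data: 'zipcode' is constructed to track the sensitive attribute
# (a common real-world proxy), while 'hours' is nearly independent of it.
sensitive = [0, 0, 0, 1, 1, 1]
features = {
    "zipcode": [1, 1, 0, 9, 8, 9],
    "hours":   [40, 38, 45, 41, 39, 44],
}
print(flag_proxy_features(features, sensitive))  # -> ['zipcode']
```

A flagged feature would then need treatment (removal or transformation) rather than being passed to the learner as-is, which is the gap the paper's approach addresses more systematically.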


Related research

- Two-stage Algorithm for Fairness-aware Machine Learning (10/13/2017)
- Removing biased data to improve fairness and accuracy (02/05/2021)
- Eliminating Latent Discrimination: Train Then Mask (11/12/2018)
- Approaching Machine Learning Fairness through Adversarial Network (09/06/2019)
- Exploiting Fairness to Enhance Sensitive Attributes Reconstruction (09/02/2022)
- PrivFair: a Library for Privacy-Preserving Fairness Auditing (02/08/2022)
- Active Fairness Instead of Unawareness (09/14/2020)
