How Biased is Your Feature?: Computing Fairness Influence Functions with Global Sensitivity Analysis

06/01/2022
by   Bishwamittra Ghosh, et al.

Fairness in machine learning has attracted significant attention due to the widespread application of machine learning in high-stakes decision-making tasks. Unless regulated with a fairness objective, machine learning classifiers may exhibit unfairness/bias towards certain demographic populations in the data. Thus, quantifying and mitigating the bias induced by classifiers has become a central concern. In this paper, we aim to quantify the influence of different features on the bias of a classifier. To this end, we propose a framework of Fairness Influence Functions (FIFs), and compute an FIF as a scaled difference of conditional variances in the prediction of the classifier. We also instantiate an algorithm, FairXplainer, that uses variance decomposition over subsets of features and a local regressor to compute FIFs accurately, while also capturing the intersectional effects of features. Our experimental analysis validates that FairXplainer captures the influences of both individual features and higher-order feature interactions, estimates the bias more accurately than existing local explanation methods, and detects the increase/decrease in bias due to affirmative/punitive actions in the classifier.
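To make the core idea concrete, below is a minimal sketch (not the paper's FairXplainer algorithm) of how a feature's fairness influence could be estimated as a scaled difference of conditional variances between demographic groups. The binning scheme, the first-order (Sobol-like) variance estimator, and the function names `first_order_variance` and `fif_sketch` are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

def first_order_variance(y, x, bins=10):
    """Estimate Var_x[ E[y | x] ], a first-order (Sobol-like) effect,
    by binning x at its quantiles and averaging y within each bin."""
    edges = np.quantile(x, np.linspace(0.0, 1.0, bins + 1))
    idx = np.clip(np.digitize(x, edges[1:-1]), 0, bins - 1)
    nonempty = [b for b in range(bins) if np.any(idx == b)]
    cond_means = np.array([y[idx == b].mean() for b in nonempty])
    weights = np.array([np.mean(idx == b) for b in nonempty])
    grand_mean = np.sum(weights * cond_means)
    return np.sum(weights * (cond_means - grand_mean) ** 2)

def fif_sketch(preds, X, sensitive):
    """For each feature j, return a scaled difference of the conditional
    variances of the predictions between the two sensitive groups."""
    g0, g1 = sensitive == 0, sensitive == 1
    total_var = preds.var() + 1e-12  # scale factor; avoids division by zero
    return {
        j: (first_order_variance(preds[g1], X[g1, j])
            - first_order_variance(preds[g0], X[g0, j])) / total_var
        for j in range(X.shape[1])
    }

# Synthetic example: group 1's predictions depend on feature 1, group 0's do not,
# so feature 1 should receive a positive influence score.
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 3))
sensitive = (rng.random(500) > 0.5).astype(int)
preds = (X[:, 0] + sensitive * X[:, 1] > 0).astype(float)
fifs = fif_sketch(preds, X, sensitive)
```

A positive score for a feature indicates it explains more prediction variance in group 1 than in group 0; summing such scores over features (and feature subsets, in the paper's full variance decomposition) would account for the classifier's overall bias.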


Related research:

- 09/20/2021 — Algorithmic Fairness Verification with Graphical Models
- 02/14/2023 — When Mitigating Bias is Unfair: A Comprehensive Study on the Impact of Bias Mitigation Algorithms
- 03/08/2021 — Fairness seen as Global Sensitivity Analysis
- 10/08/2020 — Assessing the Fairness of Classifiers with Collider Bias
- 06/17/2020 — LimeOut: An Ensemble Approach To Improve Process Fairness
- 03/03/2023 — Model Explanation Disparities as a Fairness Diagnostic
- 10/19/2022 — Towards Procedural Fairness: Uncovering Biases in How a Toxic Language Classifier Uses Sentiment Information
