Fairness-Aware Naive Bayes Classifier for Data with Multiple Sensitive Features

Fairness-aware machine learning seeks to maximise utility in generating predictions while avoiding unfair discrimination based on sensitive attributes such as race, sex, and religion. An important line of work in this field enforces fairness during the training step of a classifier. A simple yet effective binary classification algorithm that follows this strategy is two-naive-Bayes (2NB), which enforces statistical parity: the requirement that the groups comprising the dataset receive positive labels with equal likelihood. In this paper, we generalise this algorithm into N-naive-Bayes (NNB), removing the simplifying assumption of only two sensitive groups in the data and instead handling an arbitrary number of groups. We propose an extension of the original algorithm's statistical parity constraint and of the post-processing routine that enforces statistical independence of the label and the single sensitive attribute. We then investigate its application to data with multiple sensitive features and propose a new constraint and post-processing routine to enforce differential fairness, an extension of established group-fairness constraints focused on intersectionality. We empirically demonstrate the effectiveness of the NNB algorithm on US Census datasets and compare its accuracy and debiasing performance, as measured by disparate impact and DF-ϵ score, with similar group-fairness algorithms. Finally, we lay out important considerations users should be aware of before incorporating this algorithm into their application, and direct them to further reading on the pros, cons, and ethical implications of using statistical parity as a fairness criterion.
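To make the two fairness notions above concrete, the following sketch (an illustration, not the paper's implementation) computes per-group positive-prediction rates and a multi-group disparate impact score, taken here as the ratio of the lowest to the highest group rate; a ratio of 1.0 corresponds to exact statistical parity. The group labels and predictions are hypothetical toy data.

```python
from collections import defaultdict

def positive_rates(groups, preds):
    """Rate of positive (1) predictions within each sensitive group."""
    totals, positives = defaultdict(int), defaultdict(int)
    for g, y in zip(groups, preds):
        totals[g] += 1
        positives[g] += y
    return {g: positives[g] / totals[g] for g in totals}

def disparate_impact(groups, preds):
    """min/max ratio of group positive rates; 1.0 means exact statistical parity."""
    rates = positive_rates(groups, preds).values()
    return min(rates) / max(rates)

# Hypothetical toy data with three sensitive groups.
groups = ["a", "a", "b", "b", "c", "c", "c", "c"]
preds  = [ 1,   0,   1,   1,   1,   0,   1,   0 ]
print(positive_rates(groups, preds))   # {'a': 0.5, 'b': 1.0, 'c': 0.5}
print(disparate_impact(groups, preds)) # 0.5
```

With two groups this reduces to the usual disparate impact ratio; a debiasing procedure such as NNB aims to push this score toward 1.0 with minimal loss of accuracy.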
