BaBE: Enhancing Fairness via Estimation of Latent Explaining Variables

07/06/2023
by   Rūta Binkytė, et al.

We consider the problem of unfair discrimination between two groups and propose a pre-processing method to achieve fairness. Corrective methods like statistical parity typically degrade accuracy, and they fail to achieve genuine fairness when the sensitive attribute S is correlated with the legitimate attribute E (the explanatory variable) that should determine the decision. To overcome these drawbacks, other notions of fairness have been proposed, in particular conditional statistical parity and equal opportunity. However, E is often not directly observable in the data, i.e., it is a latent variable. We may observe some other variable Z representing E, but Z may itself be affected by S, and hence Z can be biased. To deal with this problem, we propose BaBE (Bayesian Bias Elimination), an approach based on a combination of Bayes inference and the Expectation-Maximization method, to estimate the most likely value of E for a given Z in each group. The decision can then be based directly on the estimated E. We show, through experiments on synthetic and real data sets, that our approach provides a good level of fairness as well as high accuracy.
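The estimation step sketched in the abstract, recovering a posterior over the latent E from the observed proxy Z within each group, can be illustrated with a small EM routine for discrete variables. This is a hedged reconstruction, not the authors' implementation: the function names, the assumption of a known discrete channel p(Z|E) per group, and the toy numbers are all illustrative.

```python
import numpy as np

def em_latent_prior(z_counts, channel, n_iter=200):
    """Estimate the latent prior p(E) for one group via EM,
    given observed counts of Z and a channel p(Z|E).

    z_counts: array of shape (Z,), counts of each observed z
    channel:  array of shape (E, Z), channel[e, z] = p(Z=z | E=e)
    """
    n_e = channel.shape[0]
    p_e = np.full(n_e, 1.0 / n_e)  # uniform initialization
    for _ in range(n_iter):
        # E-step: posterior p(E=e | Z=z) for every z
        joint = p_e[:, None] * channel                  # shape (E, Z)
        post = joint / joint.sum(axis=0, keepdims=True)
        # M-step: re-estimate the prior from expected counts
        p_e = post @ z_counts
        p_e /= p_e.sum()
    return p_e

def posterior_e_given_z(p_e, channel):
    """Bayes inversion: p(E | Z) from the estimated prior and channel."""
    joint = p_e[:, None] * channel
    return joint / joint.sum(axis=0, keepdims=True)    # shape (E, Z)

# Toy usage for one group: binary E, binary Z, a noisy channel.
channel = np.array([[0.8, 0.2],    # p(Z | E=0)
                    [0.3, 0.7]])   # p(Z | E=1)
z_counts = np.array([600.0, 400.0])  # observed Z frequencies
p_e_hat = em_latent_prior(z_counts, channel)
post = posterior_e_given_z(p_e_hat, channel)
```

A decision rule in the spirit of the abstract would then pick, for each observed z, the most likely latent value `post[:, z].argmax()`, repeating the estimation separately per group so that a group-dependent channel (the bias of Z) is accounted for.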


