On Learning and Enforcing Latent Assessment Models using Binary Feedback from Human Auditors Regarding Black-Box Classifiers

02/16/2022
by Mukund Telukunta, et al.

The algorithmic fairness literature presents numerous mathematical notions and metrics, and also points to the tradeoffs that arise when attempting to satisfy some or all of them simultaneously. Furthermore, the contextual nature of fairness notions makes it difficult to automate bias evaluation across diverse algorithmic systems. Therefore, in this paper, we propose a novel model, the latent assessment model (LAM), to characterize binary feedback provided by human auditors: the auditor is assumed to compare the classifier's output to his or her own intrinsic judgment for each input. We prove that individual and group fairness notions are guaranteed as long as the auditor's intrinsic judgments inherently satisfy the fairness notion at hand and are sufficiently similar to the classifier's evaluations. We also demonstrate this relationship between LAM and traditional fairness notions on three well-known datasets, namely the COMPAS, German Credit, and Adult Census Income datasets. Furthermore, we derive the minimum number of feedback samples needed to obtain PAC learning guarantees for estimating LAM for black-box classifiers. These guarantees are validated by training standard machine learning algorithms on real binary feedback elicited from 400 human auditors regarding COMPAS.
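As a rough illustration of the feedback model described above, the sketch below simulates an auditor who returns 1 when a black-box classifier's output agrees with her own intrinsic judgment and 0 otherwise, then fits a standard learner to that binary feedback. All names and choices here (black_box, intrinsic_judgment, the synthetic data, and the logistic-regression estimator) are hypothetical stand-ins for illustration, not the paper's actual formulation or experimental setup.

import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Synthetic inputs; in practice these would be the instances shown to the auditor.
X = rng.normal(size=(400, 5))

def black_box(X):
    # Stand-in for an opaque classifier whose outputs we can only observe.
    return (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)

def intrinsic_judgment(X):
    # The auditor's latent judgment, simulated as a slightly different noisy rule.
    return (X[:, 0] + 0.3 * X[:, 2] + rng.normal(0, 0.2, len(X)) > 0).astype(int)

# Binary feedback: 1 when the auditor agrees with the classifier's output.
y_hat = black_box(X)
feedback = (y_hat == intrinsic_judgment(X)).astype(int)

# Estimate the assessment model by fitting a standard learner to the feedback,
# using the input together with the classifier's output as features; this loosely
# mirrors the "train standard ML algorithms on binary feedback" step in the abstract.
features = np.column_stack([X, y_hat])
lam_estimate = LogisticRegression().fit(features, feedback)
print("training agreement with feedback:", lam_estimate.score(features, feedback))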


Related research

06/29/2021
Non-Comparative Fairness for Human-Auditing and Its Relation to Traditional Fairness Notions
Bias evaluation in machine-learning based services (MLS) based on tradit...

04/07/2023
Towards Inclusive Fairness Evaluation via Eliciting Disagreement Feedback from Non-Expert Stakeholders
Traditional algorithmic fairness notions rely on label feedback, which c...

09/09/2020
On the Identification of Fair Auditors to Evaluate Recommender Systems based on a Novel Non-Comparative Fairness Notion
Decision-support systems are information systems that offer support to p...

03/18/2019
Multi-Differential Fairness Auditor for Black Box Classifiers
Machine learning algorithms are increasingly involved in sensitive decis...

11/17/2020
Augmented Fairness: An Interpretable Model Augmenting Decision-Makers' Fairness
We propose a model-agnostic approach for mitigating the prediction bias ...

09/03/2020
Fairness in the Eyes of the Data: Certifying Machine-Learning Models
We present a framework that allows to certify the fairness degree of a m...

02/07/2023
From Utilitarian to Rawlsian Designs for Algorithmic Fairness
There is a lack of consensus within the literature as to how `fairness' ...
