Non-Comparative Fairness for Human-Auditing and Its Relation to Traditional Fairness Notions

06/29/2021
by Mukund Telukunta, et al.

Evaluating bias in machine-learning-based services (MLS) using traditional algorithmic fairness notions, which rely on comparative principles, is practically difficult, making it necessary to rely on feedback from human auditors. However, even after rigorous training on comparative fairness notions, human auditors are known to disagree on various aspects of fairness in practice, making it difficult to collect reliable feedback. This paper offers a paradigm shift in algorithmic fairness by proposing a new fairness notion based on the principle of non-comparative justice. In contrast to traditional fairness notions, which compare the outcomes of two individuals or groups, our proposed notion compares the MLS's outcome with a desired outcome for each input. This desired outcome naturally captures a human auditor's expectation and can readily be used to evaluate an MLS on crowd-auditing platforms. We show that any MLS can be deemed fair from the perspective of comparative fairness (whether in terms of individual fairness, statistical parity, equal opportunity, or calibration) if it is non-comparatively fair with respect to a fair auditor. We also show that the converse holds in the context of individual fairness. Since such an evaluation relies on the trustworthiness of the auditor, we also present an approach to identify fair and reliable auditors by estimating their biases with respect to a given set of sensitive attributes, and to quantify the uncertainty in those bias estimates for a given MLS. All of the above results are validated on the COMPAS, German Credit, and Adult Census Income datasets.
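The notion is per-input: an auditor supplies a desired outcome for each input, and the MLS is judged against those desired outcomes rather than by comparing the outcomes of two individuals or groups. The sketch below illustrates the idea, assuming binary outcomes, a simple disagreement-rate test with an illustrative tolerance tau, and a rough per-group bias estimate for screening auditors; the function names, the threshold, and the bias estimator are assumptions for illustration, not the paper's exact formulation.

```python
import numpy as np

def noncomparative_disagreement(mls_outcomes, desired_outcomes):
    """Fraction of inputs where the MLS outcome differs from the
    auditor's desired outcome (binary outcomes assumed)."""
    mls_outcomes = np.asarray(mls_outcomes)
    desired_outcomes = np.asarray(desired_outcomes)
    return float(np.mean(mls_outcomes != desired_outcomes))

def is_noncomparatively_fair(mls_outcomes, desired_outcomes, tau=0.05):
    """Declare the MLS fair w.r.t. this auditor if its disagreement
    with the auditor's desired outcomes stays below a tolerance tau
    (tau is an illustrative parameter, not from the paper)."""
    return noncomparative_disagreement(mls_outcomes, desired_outcomes) <= tau

def auditor_group_bias(desired_outcomes, ground_truth, sensitive):
    """Rough per-group bias estimate for an auditor: how far the
    auditor's desired positive rate deviates from the ground-truth
    positive rate within each sensitive group."""
    desired = np.asarray(desired_outcomes)
    truth = np.asarray(ground_truth)
    groups = np.asarray(sensitive)
    return {
        g: float(desired[groups == g].mean() - truth[groups == g].mean())
        for g in np.unique(groups)
    }

# Hypothetical usage on a small audit batch.
mls = [1, 0, 1, 1, 0, 1]        # outcomes produced by the MLS
auditor = [1, 0, 0, 1, 0, 1]    # desired outcomes from one auditor
truth = [1, 0, 0, 1, 1, 1]      # ground-truth labels, where available
group = ["a", "a", "b", "b", "a", "b"]

print(is_noncomparatively_fair(mls, auditor))     # False: 1/6 disagreement > 0.05
print(auditor_group_bias(auditor, truth, group))  # {'a': -0.33..., 'b': 0.0}
```

An auditor whose estimated bias is close to zero for every sensitive group would be a natural candidate for the "fair auditor" in the equivalence result above; the paper additionally quantifies the uncertainty in such bias estimates.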


Related research

09/09/2020
On the Identification of Fair Auditors to Evaluate Recommender Systems based on a Novel Non-Comparative Fairness Notion
Decision-support systems are information systems that offer support to p...

02/16/2022
On Learning and Enforcing Latent Assessment Models using Binary Feedback from Human Auditors Regarding Black-Box Classifiers
Algorithmic fairness literature presents numerous mathematical notions a...

04/07/2023
Towards Inclusive Fairness Evaluation via Eliciting Disagreement Feedback from Non-Expert Stakeholders
Traditional algorithmic fairness notions rely on label feedback, which c...

05/08/2023
Runtime Monitoring of Dynamic Fairness Properties
A machine-learned system that is fair in static decision-making tasks ma...

03/08/2023
HappyMap: A Generalized Multi-calibration Method
Multi-calibration is a powerful and evolving concept originating in the ...

10/26/2021
Fair Sequential Selection Using Supervised Learning Models
We consider a selection problem where sequentially arrived applicants ap...

10/25/2022
I Prefer not to Say: Operationalizing Fair and User-guided Data Minimization
To grant users greater authority over their personal data, policymakers ...
