On the Identification of Fair Auditors to Evaluate Recommender Systems based on a Novel Non-Comparative Fairness Notion

09/09/2020
by Mukund Telukunta, et al.

Decision-support systems are information systems that support human decisions in applications such as the judiciary, real estate, and banking. Lately, such systems have been found to be discriminatory in many practical deployments. In an attempt to evaluate and mitigate these biases, the algorithmic fairness literature has developed notions of comparative justice, which rely primarily on comparing two or more individuals or groups within the society served by such systems. However, such fairness notions are of little use in identifying fair auditors, who are hired to evaluate latent biases within decision-support systems. As a solution, we introduce a paradigm shift in algorithmic fairness by proposing a new fairness notion based on the principle of non-comparative justice. Assuming that the auditor makes fairness evaluations based on some (potentially unknown) desired properties of the decision-support system, the proposed notion compares the system's outcome with the auditor's desired outcome. We show that the proposed notion also provides guarantees in terms of comparative fairness notions: any system is fair from the perspective of comparative fairness (e.g., individual fairness and statistical parity) if it is non-comparatively fair with respect to an auditor who is fair with respect to the same notions. We also show that the converse holds in the context of individual fairness. Finally, we briefly discuss how our fairness notion can be used to identify fair and reliable auditors, and how such auditors can be used to quantify biases in decision-support systems.
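The contrast between the two fairness notions in the abstract can be made concrete with a toy sketch. The function names, the binary-decision setup, and the disagreement metric below are illustrative assumptions, not the paper's formal definitions: a comparative notion (here, statistical parity) compares outcome rates across groups, while a non-comparative check compares the system's outcomes only against a single auditor's desired outcomes.

```python
import numpy as np

def statistical_parity_gap(outcomes, groups):
    """Comparative notion: absolute difference in mean positive
    outcome between two groups (smaller is fairer)."""
    y, g = np.asarray(outcomes, dtype=float), np.asarray(groups)
    return abs(y[g == 0].mean() - y[g == 1].mean())

def non_comparative_gap(system_outcomes, auditor_desired):
    """Non-comparative notion (illustrative): average disagreement
    between the system's outcomes and the auditor's desired outcomes.
    The system is compared only against the auditor's reference,
    never across individuals or groups."""
    s = np.asarray(system_outcomes, dtype=float)
    d = np.asarray(auditor_desired, dtype=float)
    return float(np.mean(np.abs(s - d)))

# Toy data: binary decisions for six individuals in two groups.
groups  = [0, 0, 0, 1, 1, 1]
system  = [1, 1, 0, 1, 0, 0]   # the decision-support system's outcomes
auditor = [1, 1, 0, 1, 1, 0]   # the auditor's (possibly latent) desired outcomes

print(statistical_parity_gap(system, groups))   # comparative gap, ~0.333
print(non_comparative_gap(system, auditor))     # non-comparative gap, ~0.167
```

The paper's main guarantee can be read through this sketch: if the auditor's desired outcomes themselves satisfy a comparative notion, then a system with zero non-comparative gap to that auditor inherits the same comparative guarantee.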


Related research

06/29/2021 - Non-Comparative Fairness for Human-Auditing and Its Relation to Traditional Fairness Notions
Bias evaluation in machine-learning based services (MLS) based on tradit...

10/25/2022 - I Prefer not to Say: Operationalizing Fair and User-guided Data Minimization
To grant users greater authority over their personal data, policymakers ...

09/11/2023 - Re-formalization of Individual Fairness
The notion of individual fairness is a formalization of an ethical princ...

06/30/2020 - On the Applicability of ML Fairness Notions
ML-based predictive systems are increasingly used to support decisions w...

08/23/2019 - Fairness in Deep Learning: A Computational Perspective
Deep learning is increasingly being used in high-stake decision making a...

02/16/2022 - On Learning and Enforcing Latent Assessment Models using Binary Feedback from Human Auditors Regarding Black-Box Classifiers
Algorithmic fairness literature presents numerous mathematical notions a...

08/09/2023 - Fairness Notions in DAG-based DLTs
This paper investigates the issue of fairness in Distributed Ledger Tech...
