Addressing multiple metrics of group fairness in data-driven decision making

03/10/2020
by   Marius Miron, et al.

The Fairness, Accountability, and Transparency in Machine Learning (FAT-ML) literature proposes a varied set of group fairness metrics to measure discrimination against socio-demographic groups characterized by a protected feature, such as gender or race. Because several of these metrics are mutually incompatible, the same system can be deemed either fair or unfair depending on the metric chosen. In this paper we study how these metrics relate to one another, and we do so empirically: we observe that several of them cluster together into two or three main clusters for the same groups and machine learning methods. In addition, we propose a robust way to visualize multidimensional fairness in two dimensions through a Principal Component Analysis (PCA) of the group fairness metrics. Experimental results on multiple datasets show that the PCA decomposition explains the variance between the metrics with one to three components.
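The idea of projecting a vector of group fairness metrics to two dimensions with PCA can be sketched as follows. This is a minimal illustration, not the paper's pipeline: the three metric differences (statistical parity, equal opportunity, predictive parity), the synthetic classifier outputs, and the number of configurations are all assumptions chosen for the example.

```python
# Hedged sketch: compute several group fairness metrics for simulated
# classifier outputs, then project the metric vectors to 2-D with PCA.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)

def group_metrics(y_true, y_pred, group):
    """Protected-minus-reference differences on three common group
    fairness metrics (names and choice of metrics are illustrative)."""
    diffs = []
    for metric in ("statistical_parity", "equal_opportunity", "predictive_parity"):
        vals = []
        for g in (0, 1):
            m = group == g
            yt, yp = y_true[m], y_pred[m]
            if metric == "statistical_parity":   # P(yhat = 1)
                vals.append(yp.mean())
            elif metric == "equal_opportunity":  # true positive rate
                vals.append(yp[yt == 1].mean())
            else:                                # precision
                vals.append(yt[yp == 1].mean())
        diffs.append(vals[1] - vals[0])
    return diffs

# Simulate metric vectors for several (model, dataset) configurations,
# each with a slightly different group-dependent bias.
rows = []
for _ in range(8):
    y_true = rng.integers(0, 2, 500)
    group = rng.integers(0, 2, 500)
    p = 0.5 + 0.2 * (y_true - 0.5) + 0.1 * rng.uniform() * (group - 0.5)
    y_pred = (rng.uniform(size=500) < p).astype(int)
    rows.append(group_metrics(y_true, y_pred, group))

X = np.array(rows)                    # one row of metrics per configuration
pca = PCA(n_components=2)
coords = pca.fit_transform(X)         # 2-D "fairness map" of configurations
print(coords.shape)                   # one 2-D point per configuration
```

Each configuration becomes a single point in the plane, so configurations whose fairness metrics behave similarly land near each other, which is the visualization the abstract describes.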

Related research

09/09/2021
A Systematic Approach to Group Fairness in Automated Decision Making
While the field of algorithmic fairness has brought forth many ways to m...

06/06/2022
Distributive Justice as the Foundational Premise of Fair ML: Unification, Extension, and Interpretation of Group Fairness Metrics
Group fairness metrics are an established way of assessing the fairness ...

02/11/2018
Convex Formulations for Fair Principal Component Analysis
Though there is a growing body of literature on fairness for supervised ...

11/27/2020
Black Loans Matter: Distributionally Robust Fairness for Fighting Subgroup Discrimination
Algorithmic fairness in lending today relies on group fairness metrics f...

01/05/2021
Characterizing Intersectional Group Fairness with Worst-Case Comparisons
Machine Learning or Artificial Intelligence algorithms have gained consi...

06/15/2023
Harvard Glaucoma Fairness: A Retinal Nerve Disease Dataset for Fairness Learning and Fair Identity Normalization
Fairness in machine learning is important for societal well-being, but l...

07/11/2023
Towards A Scalable Solution for Improving Multi-Group Fairness in Compositional Classification
Despite the rich literature on machine learning fairness, relatively lit...
