Counterpart Fairness – Addressing Systematic between-group Differences in Fairness Evaluation

05/29/2023
by   Yifei Wang, et al.

When using machine learning (ML) to aid decision-making, it is critical to ensure that algorithmic decisions are fair, i.e., that they do not discriminate against specific individuals or groups, particularly those from underprivileged populations. Existing group fairness methods require group-wise measures to be equal, but this requirement fails to account for systematic between-group differences. Confounding factors, i.e., non-sensitive variables that manifest systematic differences, can significantly affect fairness evaluation. To mitigate this problem, we believe that fairness should be measured by comparing counterparts from different groups, i.e., individuals who are similar to each other with respect to the task of interest and whose group identities cannot be distinguished algorithmically from the confounding factors. We have developed a propensity-score-based method for identifying counterparts, which prevents fairness evaluation from comparing "oranges" with "apples". In addition, we propose a counterpart-based statistical fairness index, termed Counterpart-Fairness (CFair), to assess the fairness of ML models. Empirical studies on the Medical Information Mart for Intensive Care (MIMIC)-IV database were conducted to validate the effectiveness of CFair. We publish our code at <https://github.com/zhengyjo/CFair>.
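The paper's own implementation is in the linked repository; as a rough illustration of the general idea, the sketch below shows standard propensity-score matching, not CFair itself. A logistic model estimates each individual's probability of group membership from non-sensitive confounders, and individuals from different groups are then greedily paired when their scores fall within a caliper. All names (`propensity_scores`, `match_counterparts`) and the caliper value are illustrative assumptions, not the authors' API.

```python
import numpy as np

def propensity_scores(X, g, lr=0.1, steps=500):
    """Estimate P(group = 1 | confounders) with logistic regression
    fitted by plain gradient descent on the log-loss."""
    Xb = np.column_stack([np.ones(len(X)), X])  # add intercept column
    w = np.zeros(Xb.shape[1])
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-Xb @ w))
        w -= lr * Xb.T @ (p - g) / len(g)      # gradient of mean log-loss
    return 1.0 / (1.0 + np.exp(-Xb @ w))

def match_counterparts(scores, g, caliper=0.05):
    """Greedy 1:1 matching: pair each group-1 individual with the unused
    group-0 individual whose propensity score is closest, provided the
    scores differ by at most `caliper`."""
    treated = np.where(g == 1)[0]
    control = list(np.where(g == 0)[0])
    pairs = []
    for i in treated:
        if not control:
            break
        j = min(control, key=lambda k: abs(scores[i] - scores[k]))
        if abs(scores[i] - scores[j]) <= caliper:
            pairs.append((i, j))
            control.remove(j)  # each control individual is matched once
    return pairs

# Toy usage on synthetic data: group membership depends on the first confounder.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))
g = (rng.random(200) < 1.0 / (1.0 + np.exp(-X[:, 0]))).astype(float)
s = propensity_scores(X, g)
pairs = match_counterparts(s, g)
```

Fairness metrics would then be computed only over the matched pairs, so that "oranges" are compared with "oranges": any remaining outcome gap between matched counterparts is harder to attribute to the confounders used in the score.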

