Towards Threshold Invariant Fair Classification

06/18/2020
by Mingliang Chen, et al.

Effective machine learning models can automatically learn useful information from large quantities of data and provide highly accurate decisions. These models may, however, make predictions that are unfair to certain population groups of interest, where the grouping is based on sensitive attributes such as race and gender. Various fairness definitions, such as demographic parity and equalized odds, have been proposed in prior art to ensure that decisions guided by machine learning models are equitable. Unfortunately, a "fair" model trained under these fairness definitions is threshold sensitive, i.e., the fairness condition may no longer hold once the decision threshold is tuned. This paper introduces the notion of threshold invariant fairness, which enforces equitable performance across different groups independent of the decision threshold. To achieve this goal, the paper proposes to equalize the risk distributions among the groups via two approximation methods. Experimental results demonstrate that the proposed methodology is effective in alleviating the threshold sensitivity of machine learning models designed to achieve fairness.
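To make the threshold sensitivity concrete, below is a minimal Python sketch, not the authors' code: the two group score distributions are hypothetical, chosen so that demographic parity holds at one decision threshold but not at others. It illustrates why, as the abstract argues, equalizing the per-group risk distributions themselves (rather than a statistic at a single threshold) keeps the parity gap small for every threshold.

# Minimal sketch (not the authors' implementation): demographic parity can hold
# at one decision threshold yet break at another when the per-group risk-score
# distributions differ in shape. Group labels and distributions are hypothetical.
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical risk scores for two demographic groups: same mean, different spread.
scores_a = np.clip(rng.normal(loc=0.5, scale=0.10, size=10_000), 0.0, 1.0)
scores_b = np.clip(rng.normal(loc=0.5, scale=0.30, size=10_000), 0.0, 1.0)

def positive_rate(scores, threshold):
    """Fraction of samples predicted positive at the given decision threshold."""
    return float(np.mean(scores >= threshold))

for t in (0.5, 0.6, 0.7):
    gap = abs(positive_rate(scores_a, t) - positive_rate(scores_b, t))
    print(f"threshold={t:.1f}  demographic parity gap={gap:.3f}")

# At t = 0.5 the gap is near zero, so the model looks fair under demographic
# parity; at t = 0.7 the gap exceeds 0.2. Threshold invariant fairness instead
# pushes the whole per-group risk distributions to match (for example, by
# penalizing a distance between them during training), so the gap stays small
# at every threshold.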


Related research

01/20/2023 · Within-group fairness: A guidance for more sound between-group fairness
As they have a vital effect on social decision-making, AI algorithms not...

08/16/2022 · Error Parity Fairness: Testing for Group Fairness in Regression Tasks
The applications of Artificial Intelligence (AI) surround decisions on i...

05/08/2020 · In Pursuit of Interpretable, Fair and Accurate Machine Learning for Criminal Recidivism Prediction
In recent years, academics and investigative journalists have criticized...

07/31/2018 · The Measure and Mismeasure of Fairness: A Critical Review of Fair Machine Learning
In one broad class of supervised machine learning problems, researchers ...

06/23/2021 · Fairness for Image Generation with Uncertain Sensitive Attributes
This work tackles the issue of fairness in the context of generative pro...

05/10/2021 · Joint Fairness Model with Applications to Risk Predictions for Under-represented Populations
Under-representation of certain populations, based on gender, race/ethni...

05/11/2023 · A statistical approach to detect sensitive features in a group fairness setting
The use of machine learning models in decision support systems with high...
