Within-group fairness: A guidance for more sound between-group fairness

01/20/2023
by Sara Kim, et al.

As they have a vital effect on social decision-making, AI algorithms should not only be accurate but also should not pose unfairness against certain sensitive groups (e.g., non-white, women). Various specially designed AI algorithms have been developed to ensure that trained AI models are fair between sensitive groups. In this paper, we raise a new issue: between-group fair AI models could treat individuals in the same sensitive group unfairly. We introduce a new concept of fairness, called within-group fairness, which requires that AI models be fair for those in the same sensitive group as well as for those in different sensitive groups. We materialize the concept of within-group fairness by proposing corresponding mathematical definitions and developing learning algorithms that control within-group fairness and between-group fairness simultaneously. Numerical studies show that the proposed learning algorithms improve within-group fairness without sacrificing either accuracy or between-group fairness.
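To make the distinction concrete, the sketch below contrasts a standard between-group criterion (the demographic parity gap) with one illustrative way of looking at within-group behavior: how well a fairness-constrained model preserves the relative ordering of individuals inside each sensitive group. This is a minimal sketch; the rank-consistency measure and the function names (demographic_parity_gap, within_group_rank_consistency) are assumptions for illustration, not the mathematical definitions proposed in the paper.

```python
import numpy as np

def demographic_parity_gap(y_pred, s):
    """Between-group fairness: absolute difference in positive-prediction
    rates between the two sensitive groups (s in {0, 1})."""
    return abs(y_pred[s == 1].mean() - y_pred[s == 0].mean())

def within_group_rank_consistency(scores_unconstrained, scores_fair, s):
    """Illustrative within-group measure (an assumption, not the paper's
    definition): the fraction of pairs inside each sensitive group whose
    relative ordering is preserved after imposing the fairness constraint."""
    consistencies = []
    for g in np.unique(s):
        u = scores_unconstrained[s == g]
        f = scores_fair[s == g]
        # Sign of every pairwise score difference before and after constraining.
        du = np.sign(u[:, None] - u[None, :])
        df = np.sign(f[:, None] - f[None, :])
        mask = du != 0  # ignore ties in the unconstrained scores
        consistencies.append((du[mask] == df[mask]).mean())
    return float(np.mean(consistencies))

# Toy usage with random scores and a binary sensitive attribute.
rng = np.random.default_rng(0)
s = rng.integers(0, 2, size=200)
scores_u = rng.random(200)
scores_f = np.clip(scores_u + 0.1 * (s - 0.5), 0, 1)  # hypothetical "fair" scores
print(demographic_parity_gap((scores_f > 0.5).astype(float), s))
print(within_group_rank_consistency(scores_u, scores_f, s))
```

In this toy setup, a model can shrink the demographic parity gap while still reordering individuals inside a group; a low rank-consistency value would flag exactly the within-group unfairness the paper is concerned with.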

research
08/16/2022

Error Parity Fairness: Testing for Group Fairness in Regression Tasks

The applications of Artificial Intelligence (AI) surround decisions on i...
research
10/19/2018

Taking Advantage of Multitask Learning for Fair Classification

A central goal of algorithmic fairness is to reduce bias in automated de...
research
01/17/2022

Fair Group-Shared Representations with Normalizing Flows

The issue of fairness in machine learning stems from the fact that histo...
research
05/29/2023

Counterpart Fairness – Addressing Systematic between-group Differences in Fairness Evaluation

When using machine learning (ML) to aid decision-making, it is critical ...
research
02/07/2022

Learning fair representation with a parametric integral probability metric

As they have a vital effect on social decision-making, AI algorithms sho...
research
06/18/2020

Towards Threshold Invariant Fair Classification

Effective machine learning models can automatically learn useful informa...
research
05/21/2023

How to Capture Intersectional Fairness

In this work, we tackle the problem of intersectional group fairness in ...
