Mitigating Algorithmic Bias with Limited Annotations

07/20/2022
by Guanchu Wang, et al.

Existing work on fairness modeling commonly assumes that sensitive attributes are fully available for all instances, which may not hold in many real-world applications due to the high cost of acquiring sensitive information. When sensitive attributes are not disclosed or available, a small portion of the training data must be manually annotated in order to mitigate bias. However, annotating under the skewed distribution across sensitive groups preserves the skewness of the original dataset in the annotated subset, which leads to suboptimal bias mitigation. To tackle this challenge, we propose Active Penalization Of Discrimination (APOD), an interactive framework that guides the limited annotations toward maximally eliminating the effect of algorithmic bias. APOD integrates discrimination penalization with active instance selection to efficiently use the limited annotation budget, and it is theoretically proven to bound the algorithmic bias. In evaluations on five benchmark datasets, APOD outperforms state-of-the-art baseline methods under a limited annotation budget and performs comparably to fully annotated bias mitigation, demonstrating that APOD can benefit real-world applications where sensitive information is limited.
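To illustrate the kind of annotate-then-penalize loop the abstract describes, below is a minimal, hypothetical sketch in Python. The margin-based selection heuristic, the reweighting surrogate for the discrimination penalty, and all function names are illustrative assumptions, not the paper's exact APOD algorithm.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical APOD-style loop: train, actively reveal a few sensitive
# attributes, penalize the resulting discrimination, repeat.

def demographic_parity_gap(y_pred, sensitive):
    """Absolute difference in positive-prediction rates between groups."""
    return abs(y_pred[sensitive == 0].mean() - y_pred[sensitive == 1].mean())

rng = np.random.default_rng(0)

# Synthetic data: features X, labels y, and a hidden sensitive attribute s
# that is expensive to annotate.
n = 2000
s = rng.integers(0, 2, size=n)
X = rng.normal(size=(n, 5)) + s[:, None] * 0.8
y = (X[:, 0] + 0.5 * s + rng.normal(scale=0.5, size=n) > 0.5).astype(int)

annotated = np.zeros(n, dtype=bool)   # instances whose s we have paid to reveal
annotated[rng.choice(n, size=20, replace=False)] = True
budget, batch = 200, 20

clf = LogisticRegression(max_iter=1000)
sample_weight = np.ones(n)

while annotated.sum() < budget:
    clf.fit(X, y, sample_weight=sample_weight)
    y_hat = clf.predict(X)

    # Active selection (assumed heuristic): query the unannotated points the
    # model is least certain about, a stand-in for bias-guided selection.
    margin = np.abs(clf.decision_function(X))
    margin[annotated] = np.inf
    annotated[np.argsort(margin)[:batch]] = True

    # Discrimination penalization (assumed surrogate): up-weight the annotated
    # group that receives fewer positive predictions.
    idx = np.where(annotated)[0]
    gap = y_hat[idx][s[idx] == 0].mean() - y_hat[idx][s[idx] == 1].mean()
    disadvantaged = 1 if gap > 0 else 0
    sample_weight[idx[s[idx] == disadvantaged]] *= 1.1

print("final demographic parity gap:",
      demographic_parity_gap(clf.predict(X), s))
```

APOD's actual selection criterion is tied to its discrimination penalty and theoretical bias bound; the margin heuristic above only mirrors the overall interactive structure of spending a limited annotation budget where it most affects fairness.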


Related Research

06/23/2021
Fairness via Representation Neutralization
Existing bias mitigation methods for DNN models primarily work on learni...

06/07/2023
M^3Fair: Mitigating Bias in Healthcare Data through Multi-Level and Multi-Sensitive-Attribute Reweighting Method
In the data-driven artificial intelligence paradigm, models heavily rely...

09/03/2020
FairGNN: Eliminating the Discrimination in Graph Neural Networks with Limited Sensitive Attribute Information
Graph neural networks (GNNs) have shown great power in modeling graph st...

05/10/2021
Improving Fairness of AI Systems with Lossless De-biasing
In today's society, AI systems are increasingly used to make critical de...

04/13/2022
Mitigating Bias in Facial Analysis Systems by Incorporating Label Diversity
Facial analysis models are increasingly applied in real-world applicatio...

09/14/2020
Active Fairness Instead of Unawareness
The possible risk that AI systems could promote discrimination by reprod...

06/15/2021
Simon Says: Evaluating and Mitigating Bias in Pruned Neural Networks with Knowledge Distillation
In recent years the ubiquitous deployment of AI has posed great concerns...
