Towards Accuracy-Fairness Paradox: Adversarial Example-based Data Augmentation for Visual Debiasing

by   Yi Zhang, et al.

Machine learning fairness concerns the bias of models against certain protected or sensitive groups of people when addressing target tasks. This paper studies the debiasing problem in the context of image classification. Our data analysis on facial attribute recognition demonstrates (1) that model bias can be attributed to an imbalanced training data distribution, and (2) the potential of adversarial examples for balancing the data distribution. We are thus motivated to employ adversarial examples to augment the training data for visual debiasing. Specifically, to ensure adversarial generalization as well as cross-task transferability, we propose to couple the operations of target-task classifier training, bias-task classifier training, and adversarial example generation. The generated adversarial examples supplement the target-task training dataset by balancing the distribution over the bias variable in an online fashion. Results on simulated and real-world debiasing experiments demonstrate the effectiveness of the proposed solution in simultaneously improving model accuracy and fairness. A preliminary experiment on few-shot learning further shows the potential of adversarial attack-based pseudo-sample generation as an alternative solution to compensate for the lack of training data.
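The core idea of balancing a dataset with adversarial examples can be illustrated with a minimal sketch. The toy below uses a plain FGSM-style step against a hand-set logistic-regression "bias classifier" in NumPy; the paper's actual method couples this generation with online training of the target- and bias-task classifiers, so all names here (`fgsm_perturb`, the toy data, the fixed weights) are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def fgsm_perturb(x, w, b, target_label, eps=2.0):
    """One FGSM-style step on a logistic-regression bias classifier.

    Moves x toward target_label by stepping against the sign of the
    binary-cross-entropy gradient w.r.t. the input. (Hypothetical
    helper for illustration only.)
    """
    p = 1.0 / (1.0 + np.exp(-(x @ w + b)))   # sigmoid probability
    grad = (p - target_label) * w            # dL/dx for BCE loss
    return x - eps * np.sign(grad)           # descend toward target

# Toy imbalanced data: 90 samples of bias group 0, 10 of group 1.
rng = np.random.default_rng(0)
X0 = rng.normal(-1.0, 0.5, size=(90, 2))
X1 = rng.normal(+1.0, 0.5, size=(10, 2))

# A hand-set linear "bias classifier" separating the two groups.
w, b = np.array([1.0, 1.0]), 0.0

# Rebalance: perturb majority-group samples until the bias classifier
# assigns them to the minority group, then add them as pseudo samples.
need = len(X0) - len(X1)
pseudo = np.array([fgsm_perturb(x, w, b, target_label=1.0)
                   for x in X0[:need]])
X1_aug = np.vstack([X1, pseudo])
assert len(X1_aug) == len(X0)  # distribution over the bias variable is balanced
```

In the paper's setting the perturbed images keep their target-task label while crossing the bias classifier's decision boundary, which is what lets them fill in the underrepresented bias groups.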






