Fair Robust Active Learning by Joint Inconsistency

09/22/2022
by Tsung-Han Wu, et al.

Fair Active Learning (FAL) uses active learning techniques to achieve high model performance with limited labeled data while attaining fairness across sensitive groups (e.g., genders). However, the impact of adversarial attacks, which is vital to consider in many safety-critical machine learning applications, has not yet been addressed in FAL. Observing this, we introduce a novel task, Fair Robust Active Learning (FRAL), which integrates conventional FAL with adversarial robustness. FRAL requires ML models to leverage active learning techniques to jointly achieve equalized performance on benign data and equalized robustness against adversarial attacks across groups. On this new task, previous FAL methods generally suffer from prohibitive computational burden and limited effectiveness. We therefore develop a simple yet effective FRAL strategy based on Joint INconsistency (JIN). To efficiently find samples for labeling that can boost the performance and robustness of disadvantaged groups, our method exploits the prediction inconsistency between benign and adversarial samples as well as between standard and robust models. Extensive experiments on diverse datasets and sensitive groups demonstrate that our method not only achieves fairer performance on benign samples but also obtains fairer robustness under white-box PGD attacks than existing active learning and FAL baselines. We are optimistic that FRAL will pave a new path for developing safe and robust ML research and applications such as facial attribute recognition in biometric systems.
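
The selection criterion described above can be pictured with a small sketch. Below is a minimal, illustrative Python/NumPy example of a joint-inconsistency acquisition score, assuming softmax predictions are available from a standard model on benign and adversarial inputs and from a robust model on benign inputs. The function name `joint_inconsistency_scores`, the choice of symmetric KL divergence as the inconsistency measure, and the plain sum of the two terms are assumptions made for illustration, not the paper's exact formulation.

```python
import numpy as np

def joint_inconsistency_scores(p_std_benign, p_std_adv, p_rob_benign):
    """Score unlabeled samples by two prediction inconsistencies:
    (1) benign vs. adversarial predictions of the standard model,
    (2) standard vs. robust model predictions on benign inputs.
    All inputs are (N, C) arrays of softmax probabilities; higher scores
    mean higher inconsistency and thus higher labeling priority.
    """
    eps = 1e-12

    def sym_kl(p, q):
        # Symmetric KL divergence between two rows of class probabilities.
        kl_pq = np.sum(p * (np.log(p + eps) - np.log(q + eps)), axis=1)
        kl_qp = np.sum(q * (np.log(q + eps) - np.log(p + eps)), axis=1)
        return 0.5 * (kl_pq + kl_qp)

    attack_inconsistency = sym_kl(p_std_benign, p_std_adv)   # benign vs. adversarial
    model_inconsistency = sym_kl(p_std_benign, p_rob_benign)  # standard vs. robust model
    return attack_inconsistency + model_inconsistency

if __name__ == "__main__":
    # Toy usage with random probabilities: pick the top-k most inconsistent samples.
    rng = np.random.default_rng(0)

    def rand_probs(n, c):
        logits = rng.normal(size=(n, c))
        e = np.exp(logits - logits.max(axis=1, keepdims=True))
        return e / e.sum(axis=1, keepdims=True)

    scores = joint_inconsistency_scores(rand_probs(100, 5),
                                        rand_probs(100, 5),
                                        rand_probs(100, 5))
    top_k = np.argsort(-scores)[:10]
    print("indices selected for labeling:", top_k)
```

In a full FRAL loop, such scores would presumably be computed per sensitive group so that the labeling budget can be directed toward the currently disadvantaged group, in line with the goal of equalizing both benign performance and robustness.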

Related research

12/05/2021 · Robust Active Learning: Sample-Efficient Training of Robust Deep Learning Models
Active learning is an established technique to reduce the labeling cost ...

06/18/2012 · Active Learning for Matching Problems
Effective learning of user preferences is critical to easing user burden...

01/06/2020 · Fair Active Learning
Bias in training data and proxy attributes are probably the main reasons...

01/01/2021 · Active Learning Under Malicious Mislabeling and Poisoning Attacks
Deep neural networks usually require large labeled datasets for training...

06/19/2019 · Batch Active Learning Using Determinantal Point Processes
Data collection and labeling is one of the main challenges in employing ...

06/14/2023 · Towards Balanced Active Learning for Multimodal Classification
Training multimodal networks requires a vast amount of data due to their...

01/13/2017 · Active Self-Paced Learning for Cost-Effective and Progressive Face Identification
This paper aims to develop a novel cost-effective framework for face ide...
