Fair Few-shot Learning with Auxiliary Sets

08/28/2023
by   Song Wang, et al.

Recently, there has been growing interest in developing machine learning (ML) models that promote fairness, i.e., that eliminate biased predictions towards certain populations (e.g., individuals from a specific demographic group). Most existing works learn such models through well-designed fairness constraints in optimization. Nevertheless, in many practical ML tasks only very few labeled data samples can be collected, which can lead to inferior fairness performance. This is because existing fairness constraints are designed to restrict the prediction disparity among different sensitive groups, but with few samples it becomes difficult to measure the disparity accurately, rendering fairness optimization ineffective. In this paper, we define the fairness-aware learning task with limited training samples as the fair few-shot learning problem. To deal with this problem, we devise a novel framework that accumulates fairness-aware knowledge across different meta-training tasks and then generalizes the learned knowledge to meta-test tasks. To compensate for insufficient training samples, we propose a strategy to select and leverage an auxiliary set for each meta-test task. These auxiliary sets contain several labeled training samples that can enhance the model's fairness performance on meta-test tasks, thereby allowing useful fairness-oriented knowledge to be transferred to meta-test tasks. Furthermore, we conduct extensive experiments on three real-world datasets to validate the superiority of our framework over state-of-the-art baselines.
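The key obstacle the abstract highlights is that common group-fairness constraints depend on estimating the prediction disparity between sensitive groups, and with only a handful of labeled samples that estimate is very noisy. The sketch below illustrates this with a demographic-parity gap, a standard disparity measure (not necessarily the exact constraint used in the paper); the simulated predictions, group sizes, and the pooled "auxiliary set" are hypothetical and for illustration only.

```python
# Illustrative sketch, not the paper's implementation: with only a few labeled
# samples per sensitive group, the demographic-parity gap behind typical fairness
# constraints is estimated from very few points and is therefore noisy.
import numpy as np


def demographic_parity_gap(preds: np.ndarray, groups: np.ndarray) -> float:
    """Absolute difference in positive-prediction rates between two groups."""
    rate_g0 = preds[groups == 0].mean()
    rate_g1 = preds[groups == 1].mean()
    return abs(rate_g0 - rate_g1)


rng = np.random.default_rng(0)
true_rate = 0.5  # assume the model is actually fair: same positive rate per group

# Few-shot support set: 5 samples per group -> the estimated gap fluctuates widely.
few_preds = rng.binomial(1, true_rate, size=10)
few_groups = np.repeat([0, 1], 5)
print("few-shot gap estimate:", demographic_parity_gap(few_preds, few_groups))

# Pooling extra labeled samples (a hypothetical auxiliary set) gives a far more
# stable disparity estimate to optimize against.
aux_preds = rng.binomial(1, true_rate, size=400)
aux_groups = np.repeat([0, 1], 200)
pooled_preds = np.concatenate([few_preds, aux_preds])
pooled_groups = np.concatenate([few_groups, aux_groups])
print("pooled gap estimate:", demographic_parity_gap(pooled_preds, pooled_groups))
```

Rerunning the snippet with different seeds shows the few-shot gap estimate swinging well away from zero even though the simulated model is fair, while the pooled estimate stays close to the true gap; this is the intuition behind attaching an auxiliary set to each meta-test task.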

Related research

09/23/2020 · Fair Meta-Learning For Few-Shot Classification
Artificial intelligence nowadays plays an increasingly prominent role in...

01/30/2023 · Fairness and Accuracy under Domain Generalization
As machine learning (ML) algorithms are increasingly used in high-stakes...

09/23/2020 · Unfairness Discovery and Prevention For Few-Shot Regression
We study fairness in supervised few-shot meta-learning models that are s...

03/28/2022 · A Framework of Meta Functional Learning for Regularising Knowledge Transfer
Machine learning classifiers' capability is largely dependent on the sca...

01/15/2022 · Training Fair Deep Neural Networks by Balancing Influence
Most fair machine learning methods either highly rely on the sensitive i...

01/21/2021 · Stress Testing of Meta-learning Approaches for Few-shot Learning
Meta-learning (ML) has emerged as a promising learning method under reso...

11/29/2021 · Learning Fair Classifiers with Partially Annotated Group Labels
Recently, fairness-aware learning has become increasingly crucial, but ...
