Unfairness Discovery and Prevention For Few-Shot Regression

09/23/2020
by Chen Zhao, et al.

We study fairness in supervised few-shot meta-learning models that are sensitive to discrimination (or bias) in historical data. A machine learning model trained on biased data tends to make unfair predictions for users from minority groups. Although this problem has been studied before, existing methods mainly aim to detect and control the dependency of the target prediction on protected variables (e.g., race, gender) using a large amount of training data. These approaches carry two major drawbacks: (1) they lack a global cause-effect visualization over all variables; (2) they fail to generalize both accuracy and fairness to unseen tasks. In this work, we first discover discrimination in data using a causal Bayesian knowledge graph, which not only demonstrates the dependency of the target on the protected variable but also indicates causal effects among all variables. Next, we develop a novel algorithm based on risk difference to quantify the discriminatory influence of each protected variable in the graph. Furthermore, to protect predictions from unfairness, we propose a fast-adapted bias-control approach for meta-learning that efficiently mitigates statistical disparity in each task and thus ensures the independence of predictions from protected attributes, even with biased and few-shot data samples. Distinct from existing meta-learning models, ours efficiently reduces the group unfairness of tasks by leveraging the mean difference between the protected and unprotected groups in regression problems. Through extensive experiments on both synthetic and real-world data sets, we demonstrate that our proposed unfairness discovery and prevention approaches efficiently detect discrimination and mitigate bias in model output, and generalize both accuracy and fairness to unseen tasks with a limited number of training samples.
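As a rough illustration of the two disparity measures the abstract refers to, the sketch below computes a binary risk difference (for discrimination quantification) and the mean difference of predictions between protected and unprotected groups (the statistical disparity notion used for regression). The function names and toy data are assumptions for illustration, not the authors' implementation:

```python
import numpy as np

def risk_difference(outcome, protected):
    """Risk difference: P(adverse outcome | protected group)
    minus P(adverse outcome | unprotected group)."""
    outcome = np.asarray(outcome, dtype=bool)
    protected = np.asarray(protected, dtype=bool)
    return outcome[protected].mean() - outcome[~protected].mean()

def mean_difference(y_pred, protected):
    """Statistical disparity for regression: mean prediction of the
    protected group minus mean prediction of the unprotected group."""
    y_pred = np.asarray(y_pred, dtype=float)
    protected = np.asarray(protected, dtype=bool)
    return y_pred[protected].mean() - y_pred[~protected].mean()

# Toy data: first four individuals are in the protected group.
adverse = np.array([1, 1, 1, 0, 0, 1, 0, 0], dtype=bool)
group   = np.array([1, 1, 1, 1, 0, 0, 0, 0], dtype=bool)
rd = risk_difference(adverse, group)   # 0.75 - 0.25 = 0.5

# Regression predictions skewed upward for the unprotected group.
y_pred = np.array([1.0, 2.0, 3.0, 4.0, 5.0, 6.0])
grp    = np.array([1, 1, 1, 0, 0, 0], dtype=bool)
md = mean_difference(y_pred, grp)      # 2.0 - 5.0 = -3.0
```

A value of zero for either measure indicates parity between groups; the paper's bias-control approach can be read as driving the mean difference toward zero within each few-shot task.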


Related research:

09/23/2020 · Fair Meta-Learning For Few-Shot Classification
Artificial intelligence nowadays plays an increasingly prominent role in...

04/06/2022 · Marrying Fairness and Explainability in Supervised Learning
Machine learning algorithms that aid human decision-making may inadverte...

11/05/2018 · FairMod - Making Predictive Models Discrimination Aware
Predictive models such as decision trees and neural networks may produce...

02/16/2020 · Convex Fairness Constrained Model Using Causal Effect Estimators
Recent years have seen much research on fairness in machine learning. He...

08/28/2023 · Fair Few-shot Learning with Auxiliary Sets
Recently, there has been a growing interest in developing machine learni...

12/21/2017 · Interventions over Predictions: Reframing the Ethical Debate for Actuarial Risk Assessment
Actuarial risk assessments might be unduly perceived as a neutral way to...

11/22/2017 · Calibration for the (Computationally-Identifiable) Masses
As algorithms increasingly inform and influence decisions made about ind...
