Fair Visual Recognition via Intervention with Proxy Features

11/02/2022
by Yi Zhang, et al.

Deep learning models often learn to make predictions that rely on sensitive social attributes such as gender and race, which poses significant fairness risks, especially in societal applications, e.g., hiring, banking, and criminal justice. Existing work tackles this issue by minimizing the information about social attributes encoded in models. However, the high correlation between the target task and social attributes makes such bias mitigation incompatible with target-task accuracy. Recalling that model bias arises because learning features related to bias attributes (i.e., bias features) aids target-task optimization, we explore the following research question: can we leverage proxy features to replace the role of bias features in target-task optimization for debiasing? To this end, we propose Proxy Debiasing, which first transfers the target task's reliance on bias information from bias features to artificial proxy features, and then employs causal intervention to eliminate the proxy features at inference. The key idea of Proxy Debiasing is to design controllable proxy features that, on the one hand, replace bias features in contributing to the target task during training, and on the other hand, are easily removed by intervention during inference. This guarantees the elimination of bias features without affecting the target information, thus addressing the fairness-accuracy paradox of previous debiasing solutions. We apply Proxy Debiasing to several benchmark datasets and achieve significant improvements over state-of-the-art debiasing methods in both accuracy and fairness.
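The mechanism described in the abstract lends itself to a short illustration. Below is a minimal PyTorch sketch of the training-time proxy channel and the inference-time intervention; all names (ProxyDebiasModel, proxy_emb, neutral) are hypothetical, and the paper's actual architecture and intervention may differ.

```python
# Minimal sketch of the Proxy Debiasing idea in PyTorch. All names here
# (ProxyDebiasModel, proxy_emb, neutral) are hypothetical illustrations,
# not the paper's implementation.
import torch
import torch.nn as nn

class ProxyDebiasModel(nn.Module):
    def __init__(self, backbone, feat_dim, num_bias_classes, proxy_dim, num_classes):
        super().__init__()
        self.backbone = backbone  # target feature extractor
        # Controllable proxy features: one embedding per bias-attribute value.
        self.proxy_emb = nn.Embedding(num_bias_classes, proxy_dim)
        # Fixed neutral value substituted for the proxy at inference.
        self.neutral = nn.Parameter(torch.zeros(proxy_dim))
        self.classifier = nn.Linear(feat_dim + proxy_dim, num_classes)

    def forward(self, x, bias_label=None):
        feat = self.backbone(x)
        if self.training and bias_label is not None:
            # Training: bias information enters through the proxy channel,
            # reducing the incentive to encode it in the target features.
            proxy = self.proxy_emb(bias_label)
        else:
            # Inference: the intervention replaces the proxy with the same
            # neutral value for every sample, severing the bias pathway.
            proxy = self.neutral.expand(feat.size(0), -1)
        return self.classifier(torch.cat([feat, proxy], dim=1))

# Usage with a toy backbone (hypothetical shapes, for illustration only):
model = ProxyDebiasModel(nn.Flatten(), feat_dim=3 * 32 * 32,
                         num_bias_classes=2, proxy_dim=16, num_classes=10)
x = torch.randn(4, 3, 32, 32)
logits_train = model(x, bias_label=torch.tensor([0, 1, 0, 1]))  # training pass
model.eval()
logits_fair = model(x)  # inference pass with the intervention applied
```

Because the proxy is an explicit, separate input channel rather than something entangled in the learned representation, removing it at inference amounts to a single substitution and leaves the target features untouched, which is what lets the approach sidestep the fairness-accuracy trade-off described above.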


Related Research

08/13/2023
Benign Shortcut for Debiasing: Fair Visual Recognition via Intervention with Shortcut Features
Machine learning models often learn to make predictions that rely on sen...

06/23/2021
Fairness via Representation Neutralization
Existing bias mitigation methods for DNN models primarily work on learni...

09/10/2021
Fairness without the sensitive attribute via Causal Variational Autoencoder
In recent years, most fairness strategies in machine learning models foc...

07/09/2021
Multiaccurate Proxies for Downstream Fairness
We study the problem of training a model that must obey demographic fair...

05/22/2023
Risk Scores, Label Bias, and Everything but the Kitchen Sink
In designing risk assessment algorithms, many scholars promote a "kitche...

05/23/2023
FITNESS: A Causal De-correlation Approach for Mitigating Bias in Machine Learning Software
Software built on top of machine learning algorithms is becoming increas...

02/17/2022
Gradient Based Activations for Accurate Bias-Free Learning
Bias mitigation in machine learning models is imperative, yet challengin...
