On Fair Classification with Mostly Private Sensitive Attributes

07/18/2022
by   Canyu Chen, et al.

Machine learning models have demonstrated promising performance in many areas. However, concerns that they can be biased against specific groups hinder their adoption in high-stakes applications. It is therefore essential to ensure fairness in machine learning models. Most previous efforts require access to sensitive attributes for mitigating bias. Nonetheless, it is often infeasible to obtain large-scale data with sensitive attributes due to people's increasing awareness of privacy and to legal compliance. Therefore, an important research question is how to make fair predictions under privacy constraints. In this paper, we study a novel problem of fair classification in a semi-private setting, where most of the sensitive attributes are private and only a small amount of clean sensitive attributes is available. To this end, we propose a novel framework, FairSP, that first learns to correct the noisy sensitive attributes under a privacy guarantee by exploiting the limited clean sensitive attributes. Then, it jointly models the corrected and clean data in an adversarial way for debiasing and prediction. Theoretical analysis shows that the proposed model can ensure fairness when most of the sensitive attributes are private. Experimental results on real-world datasets demonstrate the effectiveness of the proposed model in making fair predictions under privacy while maintaining high accuracy.
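The abstract describes a two-stage pipeline: (1) correct noisy (privatized) sensitive attributes using a small clean subset, then (2) train a classifier that is debiased with respect to the corrected attributes. The sketch below illustrates that pipeline on synthetic data, assuming randomized-response-style flipping of the attribute and substituting a simple covariance fairness penalty for the paper's adversarial component; all data, names, and hyperparameters here are illustrative assumptions, not the authors' FairSP implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# --- Synthetic semi-private data (illustrative, not the paper's datasets) ---
n, d = 2000, 5
a = rng.integers(0, 2, n).astype(float)         # true sensitive attribute
X = rng.normal(size=(n, d)) + a[:, None] * 0.8  # features correlated with a
y = (X[:, 0] + 0.5 * a + rng.normal(scale=0.5, size=n) > 0.7).astype(float)

noisy_a = np.where(rng.random(n) < 0.4, 1 - a, a)   # privatized (flipped) attribute
clean_idx = rng.choice(n, size=100, replace=False)  # small clean subset

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fit_logreg(X, t, lr=0.1, steps=500):
    """Plain logistic regression via gradient descent."""
    w = np.zeros(X.shape[1])
    for _ in range(steps):
        w -= lr * X.T @ (sigmoid(X @ w) - t) / len(t)
    return w

# Stage 1: learn to correct noisy attributes from the clean subset.
Xa = np.hstack([X, noisy_a[:, None]])
w_corr = fit_logreg(Xa[clean_idx], a[clean_idx])
a_hat = (sigmoid(Xa @ w_corr) > 0.5).astype(float)
a_hat[clean_idx] = a[clean_idx]  # keep the clean labels where available

# Stage 2: fit the label classifier with a covariance fairness penalty
# (a stand-in for the adversarial debiasing described in the abstract).
def fit_fair(X, y, a_hat, lam=5.0, lr=0.1, steps=1000):
    w = np.zeros(X.shape[1])
    ac = a_hat - a_hat.mean()  # centered corrected attribute
    for _ in range(steps):
        p = sigmoid(X @ w)
        g_ce = X.T @ (p - y) / len(y)                    # cross-entropy grad
        cov = ac @ p / len(y)                            # cov(a_hat, p)
        g_fair = 2 * cov * (X.T @ (ac * p * (1 - p))) / len(y)
        w -= lr * (g_ce + lam * g_fair)
    return w

w_fair = fit_fair(X, y, a_hat)
w_plain = fit_logreg(X, y)

def dp_gap(w):
    """Demographic parity gap w.r.t. the true sensitive attribute."""
    pred = sigmoid(X @ w) > 0.5
    return abs(pred[a == 1].mean() - pred[a == 0].mean())

print(f"parity gap: plain={dp_gap(w_plain):.3f} fair={dp_gap(w_fair):.3f}")
```

The penalty drives the covariance between predictions and the corrected attribute toward zero, so the parity gap of the fair model should shrink relative to the unconstrained one; the quality of stage 2 depends directly on how well stage 1 recovers the attributes from the small clean subset.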

Related research
04/29/2021

You Can Still Achieve Fairness Without Sensitive Attributes: Exploring Biases in Non-Sensitive Features

Though machine learning models are achieving great success, extensive s...
02/16/2023

Group Fairness with Uncertainty in Sensitive Attributes

We consider learning a fair predictive model when sensitive attributes a...
09/02/2022

Exploiting Fairness to Enhance Sensitive Attributes Reconstruction

In recent years, a growing body of work has emerged on how to learn mach...
09/26/2020

Differentially Private and Fair Deep Learning: A Lagrangian Dual Approach

A critical concern in data-driven decision making is to build models who...
06/04/2022

When Personalization Harms: Reconsidering the Use of Group Attributes in Prediction

The standard approach to personalization in machine learning consists of...
11/15/2020

FAIR: Fair Adversarial Instance Re-weighting

With growing awareness of societal impact of artificial intelligence, fa...
06/08/2022

How Unfair is Private Learning?

As machine learning algorithms are deployed on sensitive data in critica...
