Learning Fair Models without Sensitive Attributes: A Generative Approach

03/30/2022
by Huaisheng Zhu, et al.

Most existing fair classifiers rely on sensitive attributes to achieve fairness. In many scenarios, however, sensitive attributes cannot be collected due to privacy and legal restrictions, which challenges many existing methods. Even when sensitive attributes are unavailable, many applications provide features or information in various formats that are relevant to them. For example, a person's purchase history can reflect their race, which would be helpful for learning classifiers that are fair with respect to race. However, work on exploiting such relevant features to learn fair models without sensitive attributes is rather limited. Therefore, in this paper, we study the novel problem of learning fair models without sensitive attributes by exploring relevant features. We propose a probabilistic generative framework that estimates the sensitive attribute from training data with relevant features in various formats, and uses the estimated sensitive attribute to learn fair models. Experimental results on real-world datasets demonstrate the effectiveness of our framework in terms of both accuracy and fairness.
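The two-stage idea in the abstract, first inferring a pseudo sensitive attribute generatively from a relevant feature, then regularizing a classifier with it, can be illustrated with a minimal numpy-only sketch. This is not the paper's actual model: the two-component Gaussian mixture, the demographic-parity penalty, and all hyper-parameters (e.g. `lam`) are illustrative stand-ins chosen for brevity.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic data: the true sensitive attribute s is NOT observed at training time.
# A "relevant feature" r (e.g., a purchase-history statistic) correlates with s.
n = 2000
s = rng.integers(0, 2, n)                       # hidden sensitive attribute
r = rng.normal(loc=2.0 * s, scale=1.0)          # relevant feature, shifted by s
x = rng.normal(size=(n, 2)) + 0.8 * s[:, None]  # non-sensitive features
y = (x[:, 0] + 0.5 * s + rng.normal(scale=0.5, size=n) > 0.5).astype(float)

# Step 1: estimate s from the relevant feature with a tiny two-component
# Gaussian mixture fitted by EM (a stand-in for the paper's generative model).
mu = np.array([r.min(), r.max()])
var = np.array([1.0, 1.0])
pi = np.array([0.5, 0.5])
for _ in range(50):
    # E-step: per-point responsibilities of each component
    dens = np.exp(-0.5 * (r[:, None] - mu) ** 2 / var) / np.sqrt(2 * np.pi * var)
    resp = dens * pi
    resp /= resp.sum(axis=1, keepdims=True)
    # M-step: update mixture parameters
    nk = resp.sum(axis=0)
    mu = (resp * r[:, None]).sum(axis=0) / nk
    var = (resp * (r[:, None] - mu) ** 2).sum(axis=0) / nk
    pi = nk / n
s_hat = resp.argmax(axis=1)                     # pseudo sensitive attribute

# Step 2: logistic regression trained with a demographic-parity penalty that
# uses the *estimated* attribute s_hat, since the true s is unavailable.
xb = np.hstack([x, np.ones((n, 1))])            # add bias column
w = np.zeros(xb.shape[1])
lam = 1.0                                       # illustrative fairness weight
g0, g1 = s_hat == 0, s_hat == 1
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-xb @ w))
    grad = xb.T @ (p - y) / n                   # logistic-loss gradient
    # Penalize the squared gap between groups' mean predicted positive rates.
    gap = p[g1].mean() - p[g0].mean()
    dgap = (xb[g1] * (p[g1] * (1 - p[g1]))[:, None]).mean(axis=0) \
         - (xb[g0] * (p[g0] * (1 - p[g0]))[:, None]).mean(axis=0)
    w -= 0.5 * (grad + lam * 2 * gap * dgap)

p = 1.0 / (1.0 + np.exp(-xb @ w))
acc = ((p > 0.5) == y).mean()
dp_gap = abs(p[s == 1].mean() - p[s == 0].mean())   # evaluated on the true s
print(f"accuracy={acc:.2f}  demographic-parity gap={dp_gap:.2f}")
```

Note that mixture component labels are only identified up to a swap; a symmetric fairness penalty such as the parity gap above is unaffected by which component is called group 0.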

