Group Fairness with Uncertainty in Sensitive Attributes

02/16/2023
by Abhin Shah, et al.

We consider learning a fair predictive model when sensitive attributes are uncertain, say, due to a limited amount of labeled data, collection bias, or a privacy mechanism. We formulate the problem, for the independence notion of fairness, using the information bottleneck principle, and propose a robust optimization with respect to an uncertainty set of the sensitive attributes. As an illustrative case, we consider the joint Gaussian model and reduce the task to a quadratically constrained quadratic program (QCQP). To ensure a strict fairness guarantee, we propose a robust QCQP and completely characterize its solution with an intuitive geometric understanding. When uncertainty arises due to limited labeled sensitive attributes, our analysis reveals the contribution of each new sample towards the optimal performance achievable with unlimited access to labeled sensitive attributes. This allows us to identify non-trivial regimes where uncertainty incurs no performance loss for the proposed algorithm while continuing to guarantee strict fairness. We also propose a bootstrap-based generic algorithm that is applicable beyond the Gaussian case. We demonstrate the value of our analysis and method on synthetic data as well as real-world classification and regression tasks.
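The abstract does not include code, so the sketch below is only a rough illustration of the two ingredients it names: an uncertainty set for the sensitive attribute built by bootstrapping a small labeled subset, and a robust QCQP that enforces a fairness constraint against every element of that set. The synthetic data, the variable names (X, y, s, eps, B), and the specific covariance-based independence proxy are assumptions made for this example, not the paper's exact formulation.

```python
import numpy as np
import cvxpy as cp

rng = np.random.default_rng(0)

# Synthetic data: features X, target y, and a sensitive attribute s that is
# labeled only on a small subset (the limited-label setting in the abstract).
n, d, n_labeled = 2000, 5, 100
X = rng.normal(size=(n, d))
y = X @ rng.normal(size=d) + 0.1 * rng.normal(size=n)
s = X[:, 0] + 0.5 * rng.normal(size=n)          # correlated with the first feature
idx = rng.choice(n, size=n_labeled, replace=False)

# Bootstrap the labeled subset to build an uncertainty set of plausible
# cross-covariance directions Cov(X, s); each resample yields one direction.
B, directions = 50, []
for _ in range(B):
    b = rng.choice(idx, size=n_labeled, replace=True)
    directions.append(np.cov(X[b].T, s[b])[:-1, -1])

# Robust QCQP sketch: minimize squared prediction error subject to a small
# squared covariance between the linear prediction X @ w and every
# bootstrapped direction, i.e. an approximate-independence constraint that
# must hold for all members of the uncertainty set.
eps = 1e-2
w = cp.Variable(d)
constraints = [cp.square(a @ w) <= eps for a in directions]
prob = cp.Problem(cp.Minimize(cp.sum_squares(X @ w - y) / n), constraints)
prob.solve()

print("objective:", prob.value)
print("worst-case |covariance proxy|:", max(abs(a @ w.value) for a in directions))
```

Tightening eps trades predictive accuracy for a stricter fairness guarantee that holds uniformly over the bootstrapped uncertainty set, which mirrors the robustness idea described in the abstract.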
