
Fair quantile regression

by Dana Yang et al.
Yale University

Quantile regression is a tool for learning conditional distributions. In this paper we study quantile regression in the setting where a protected attribute is unavailable when fitting the model. This can lead to "unfair" quantile estimators, for which the effective quantiles differ markedly across the subpopulations defined by the protected attribute. We propose a procedure for adjusting the estimator on a held-out sample where the protected attribute is available. The main result of the paper is an empirical process analysis showing that the adjustment leads to a fair estimator for which the target quantiles are brought into balance, in a statistical sense that we call √(n)-fairness. We illustrate the ideas and adjustment procedure on a dataset of 200,000 live births, where the objective is to characterize the dependence of the birth weights of the babies on demographic attributes of the birth mother; the protected attribute is the mother's race.
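The abstract's core idea — fit a quantile estimator without the protected attribute, then rebalance it per group on a held-out sample — can be sketched in a few lines. The code below is a simplified illustration, not the paper's procedure: it fits a linear τ-quantile regression by subgradient descent on the pinball loss, then shifts the predictions for each group by the τ-quantile of that group's held-out residuals so the effective quantile level matches τ in every subpopulation. The data, the linear model, and the additive per-group offset are all assumptions made for the sketch.

```python
import numpy as np

def fit_linear_quantile(X, y, tau, lr=0.05, steps=3000):
    """Fit y ~ Xw + b at quantile level tau by subgradient descent on the pinball loss."""
    n, d = X.shape
    w, b = np.zeros(d), 0.0
    for _ in range(steps):
        r = y - (X @ w + b)
        # Subgradient of the pinball loss: -tau on positive residuals, (1 - tau) on negative.
        g = np.where(r > 0, -tau, 1.0 - tau)
        w -= lr * (X.T @ g) / n
        b -= lr * g.mean()
    return w, b

def group_adjustments(y_hold, pred_hold, groups_hold, tau):
    """Per-group offset: the tau-quantile of held-out residuals within each group."""
    return {g: np.quantile(y_hold[groups_hold == g] - pred_hold[groups_hold == g], tau)
            for g in np.unique(groups_hold)}

# Synthetic example: two groups with different noise levels (assumed for illustration).
rng = np.random.default_rng(0)
n, tau = 2000, 0.9
X = rng.normal(size=(n, 2))
groups = rng.integers(0, 2, size=n)
noise = rng.normal(size=n) * np.where(groups == 0, 0.5, 2.0)
y = X @ np.array([1.0, -0.5]) + noise

# Fit on a training split that never sees the group label.
train, hold = np.arange(n) < n // 2, np.arange(n) >= n // 2
w, b = fit_linear_quantile(X[train], y[train], tau)
pred_hold = X[hold] @ w + b

# Adjust on the held-out split, where the group label is available.
offsets = group_adjustments(y[hold], pred_hold, groups[hold], tau)
adjusted = pred_hold + np.vectorize(offsets.get)(groups[hold])

for g in offsets:
    mask = groups[hold] == g
    coverage = np.mean(y[hold][mask] <= adjusted[mask])
    print(f"group {g}: effective quantile after adjustment = {coverage:.3f}")
```

On the held-out sample the per-group coverage is pinned near τ by construction; the paper's contribution is showing, via an empirical process analysis, that this balance also holds out of sample at the √(n) rate.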



