Achieving Equalized Odds by Resampling Sensitive Attributes

06/08/2020
by Yaniv Romano, et al.

We present a flexible framework for learning predictive models that approximately satisfy the equalized odds notion of fairness. This is achieved by introducing a general discrepancy functional that rigorously quantifies violations of this criterion. This differentiable functional is used as a penalty driving the model parameters towards equalized odds. To rigorously evaluate fitted models, we develop a formal hypothesis test to detect whether a prediction rule violates this property, the first such test in the literature. Both the model fitting and hypothesis testing leverage a resampled version of the sensitive attribute obeying equalized odds, by construction. We demonstrate the applicability and validity of the proposed framework both in regression and multi-class classification problems, reporting improved performance over state-of-the-art methods. Lastly, we show how to incorporate techniques for equitable uncertainty quantification—unbiased for each group under study—to communicate the results of the data analysis in exact terms.
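To make the resampling idea concrete, here is a minimal sketch (not the authors' implementation) of the construction described in the abstract: a dummy copy of the sensitive attribute is drawn from its estimated conditional distribution given the (discretized) response, so the dummy attribute satisfies equalized odds by construction, and a simple covariance-style statistic comparing predictions paired with the real versus the resampled attribute stands in for the paper's discrepancy functional. The function names, the quantile binning, and the covariance-based statistic are illustrative assumptions, not the exact penalty used in the paper.

```python
# Illustrative sketch (not the authors' code) of resampling a sensitive
# attribute so that it satisfies equalized odds by construction.
import numpy as np

def resample_sensitive_attribute(a, y, n_bins=10, rng=None):
    """Draw a_tilde[i] from the empirical distribution of A given Y near y[i].

    Because a_tilde depends on the data only through Y, the triple
    (X, a_tilde, Y) obeys equalized odds by construction.
    """
    rng = np.random.default_rng(rng)
    # Discretize a continuous response into quantile bins (for regression);
    # for classification, the class labels can serve as bins directly.
    cuts = np.quantile(y, np.linspace(0, 1, n_bins + 1)[1:-1])
    y_bin = np.digitize(y, cuts)
    a_tilde = np.empty_like(a)
    for b in np.unique(y_bin):
        idx = np.where(y_bin == b)[0]
        # Within each bin, resample the observed attributes with replacement.
        a_tilde[idx] = rng.choice(a[idx], size=idx.size, replace=True)
    return a_tilde

def discrepancy(pred, a, a_tilde):
    """Toy stand-in for the discrepancy functional: the gap between the
    covariance of predictions with the real attribute and with the resampled
    one, which is approximately zero when equalized odds holds."""
    return abs(np.cov(pred, a)[0, 1] - np.cov(pred, a_tilde)[0, 1])

# Usage on synthetic data with a deliberately unfair predictor.
rng = np.random.default_rng(0)
n = 1000
a = rng.integers(0, 2, size=n).astype(float)  # binary sensitive attribute
x = rng.normal(size=n) + a                     # feature correlated with A
y = x + rng.normal(size=n)                     # response
pred = x                                       # predictor leaking A beyond Y
a_tilde = resample_sensitive_attribute(a, y, rng=1)
print(discrepancy(pred, a, a_tilde))
```

In the paper this kind of discrepancy is differentiable and added to the training loss as a penalty; the same resampled attribute also underlies the hypothesis test, since it provides a reference sample that satisfies equalized odds exactly.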

Related research:

- A Unified Approach to Hypothesis Testing for Functional Linear Models (09/06/2021)
- Testing Group Fairness via Optimal Transport Projections (06/02/2021)
- Error Parity Fairness: Testing for Group Fairness in Regression Tasks (08/16/2022)
- Statistical Inference for Fairness Auditing (05/05/2023)
- Group Fairness with Uncertainty in Sensitive Attributes (02/16/2023)
- EqGNN: Equalized Node Opportunity in Graphs (08/19/2021)
- A Distributionally Robust Approach to Fair Classification (07/18/2020)
