Fair Logistic Regression: An Adversarial Perspective

03/10/2019
by Ashkan Rezaei et al.

Fair prediction methods have primarily been built around existing classification techniques using pre-processing methods, post-hoc adjustments, reduction-based constructions, or deep learning procedures. We investigate a new approach to fair data-driven decision making by designing predictors with fairness requirements integrated into their core formulations. We augment a game-theoretic construction of the logistic regression model with fairness constraints, producing a novel prediction model that robustly and fairly minimizes the logarithmic loss. We demonstrate the advantages of our approach on a range of benchmark datasets for fairness.
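To give a rough sense of what it means to fold a fairness requirement directly into log-loss minimization, the sketch below trains an ordinary logistic regression with a demographic-parity penalty added to the logarithmic loss. This is a simplified stand-in, not the paper's adversarial game-theoretic formulation; the function names, the choice of penalty, and the synthetic data are assumptions for illustration only.

# Illustrative sketch only: logistic regression trained on log loss plus a
# demographic-parity penalty. A simplified stand-in for the paper's
# adversarial formulation, not the authors' method.
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fit_fair_logreg(X, y, group, lam=1.0, lr=0.1, epochs=2000):
    """Minimize average log loss + lam * (mean score gap between groups)^2."""
    n, d = X.shape
    w, b = np.zeros(d), 0.0
    g0, g1 = (group == 0), (group == 1)
    for _ in range(epochs):
        p = sigmoid(X @ w + b)
        # Gradient of the average logarithmic loss.
        grad_w = X.T @ (p - y) / n
        grad_b = np.mean(p - y)
        # Demographic-parity gap between mean predicted scores of the two groups.
        gap = p[g1].mean() - p[g0].mean()
        s = p * (1 - p)  # derivative of the sigmoid w.r.t. its argument
        dgap_w = (X[g1] * s[g1, None]).mean(axis=0) - (X[g0] * s[g0, None]).mean(axis=0)
        dgap_b = s[g1].mean() - s[g0].mean()
        grad_w += lam * 2 * gap * dgap_w
        grad_b += lam * 2 * gap * dgap_b
        w -= lr * grad_w
        b -= lr * grad_b
    return w, b

if __name__ == "__main__":
    # Hypothetical synthetic data: one feature correlated with group membership.
    rng = np.random.default_rng(0)
    X = rng.normal(size=(500, 3))
    group = (rng.random(500) < 0.5).astype(int)
    y = (X[:, 0] + 0.8 * group + rng.normal(scale=0.5, size=500) > 0).astype(float)
    w, b = fit_fair_logreg(X, y, group, lam=5.0)
    p = sigmoid(X @ w + b)
    print("demographic-parity gap:", abs(p[group == 1].mean() - p[group == 0].mean()))

Increasing lam trades predictive log loss for a smaller gap between the groups' mean predicted scores; the paper instead enforces fairness constraints inside a robust minimax construction of the log-loss objective.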


