FairALM: Augmented Lagrangian Method for Training Fair Models with Little Regret

04/03/2020
by Vishnu Suresh Lokhande, et al.

Algorithmic decision making based on computer vision and machine learning technologies continues to permeate our lives. But concerns have grown in the general public about the biases of these models and the extent to which they treat certain segments of the population unfairly. It is now accepted that, because of biases in the datasets we present to the models, fairness-oblivious training will lead to unfair models. An interesting topic is the study of mechanisms via which the de novo design or training of a model can be informed by fairness measures. Here, we study mechanisms that impose fairness concurrently while training the model. Existing fairness-based approaches in vision have largely relied on training adversarial modules together with the primary classification/regression task, in an effort to remove the influence of the protected attribute or variable; we show how ideas based on well-known optimization concepts can provide a simpler alternative. In our proposed scheme, imposing fairness requires only specifying the protected attribute and using our optimization routine. We provide a detailed technical analysis and present experiments demonstrating that various fairness measures from the literature can be reliably imposed on a number of training tasks in vision in an interpretable manner.
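As a concrete illustration of the general idea (not the paper's exact FairALM routine), the sketch below trains a logistic-regression classifier while driving a smooth demographic-parity surrogate toward zero with an augmented Lagrangian: a multiplier term plus a quadratic penalty on the constraint, with dual ascent on the multiplier. The model choice, the constraint surrogate h, and all hyperparameters here are illustrative assumptions rather than details taken from the paper.

# A minimal augmented-Lagrangian sketch for fairness-constrained training.
# Assumptions (not from the paper): logistic regression, a smooth
# demographic-parity surrogate h(w), and illustrative hyperparameters.
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train_fair(X, y, s, steps=2000, lr=0.1, rho=1.0):
    """X: (n, d) features; y: {0,1} labels; s: {0,1} protected attribute."""
    n, d = X.shape
    w = np.zeros(d)
    lam = 0.0  # Lagrange multiplier for the fairness constraint h(w) = 0
    for _ in range(steps):
        p = sigmoid(X @ w)
        # h(w): gap in mean predicted score between the two groups,
        # a smooth surrogate for the demographic-parity gap.
        h = p[s == 1].mean() - p[s == 0].mean()
        g_loss = X.T @ (p - y) / n            # gradient of the logistic loss
        dp = p * (1.0 - p)                    # sigmoid derivative per example
        g_h = (X[s == 1].T @ dp[s == 1]) / max((s == 1).sum(), 1) \
            - (X[s == 0].T @ dp[s == 0]) / max((s == 0).sum(), 1)
        # Augmented Lagrangian: f(w) + lam*h(w) + (rho/2)*h(w)^2,
        # whose gradient in w is g_loss + (lam + rho*h) * g_h.
        w -= lr * (g_loss + (lam + rho * h) * g_h)
        lam += rho * h                        # dual ascent on the multiplier
    return w, lam

In this pattern the only fairness-specific inputs are the protected attribute s and the constraint surrogate h, which matches the abstract's claim that imposing fairness reduces to specifying the protected attribute and running a constrained optimization routine.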


