GetFair: Generalized Fairness Tuning of Classification Models

08/01/2022
by   Sandipan Sikdar, et al.

We present GetFair, a novel framework for tuning the fairness of classification models. The fair classification problem concerns training models for a classification task in which data points carry sensitive attributes. The goal of fair classification models is not only to produce accurate predictions but also to prevent discrimination against subpopulations (i.e., individuals with a specific value of the sensitive attribute). Existing methods for improving the fairness of classification models, however, are often designed for a particular fairness metric or classifier model, and they may be unsuitable when the training data is incomplete or when optimizing for multiple fairness metrics is important. GetFair represents a general solution to this problem. The approach works as follows: first, a given classifier is trained on the training data without any fairness objective; a reinforcement-learning-inspired tuning procedure then updates the parameters of the learned model with respect to a given fairness objective. This disentangles classifier training from fairness tuning, making the framework more general and allowing any parameterized classifier model to be adopted. Because fairness metrics are cast as reward functions during tuning, GetFair generalizes across fairness metrics. We demonstrate this generalizability via evaluation over a benchmark suite of datasets, classification models, and fairness metrics. In addition, GetFair can be deployed in settings where the training data is incomplete or the classifier needs to be tuned on multiple fairness metrics. GetFair not only contributes a flexible method to the repertoire of tools available for improving the fairness of classification models; it also adapts seamlessly to settings where existing fair classification methods may not be suitable or applicable.
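The two-stage idea in the abstract (ordinary training first, then metric-as-reward tuning of the trained parameters) can be sketched in a few lines. The sketch below is a minimal toy, not the paper's implementation: it uses a linear classifier on synthetic data, a simple hill-climb as a stand-in for both training and the RL tuning procedure, demographic parity as the example fairness metric, and a hypothetical trade-off weight `lam`.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic data: two features and a binary sensitive attribute s.
# Feature values correlate with s, so an accuracy-only classifier
# tends to pick up the bias.
n = 2000
s = rng.integers(0, 2, size=n)
x = rng.normal(size=(n, 2)) + 0.8 * s[:, None]
y = (x[:, 0] + 0.5 * rng.normal(size=n) > 0.5).astype(int)

def predict(w, x):
    """Linear threshold classifier; w = [w1, w2, bias]."""
    return (x @ w[:2] + w[2] > 0).astype(int)

def accuracy(w):
    return float((predict(w, x) == y).mean())

def dp_gap(w):
    """Demographic parity gap: |P(yhat=1 | s=0) - P(yhat=1 | s=1)|."""
    p = predict(w, x)
    return abs(float(p[s == 0].mean()) - float(p[s == 1].mean()))

# Stage 1: train the classifier with no fairness objective
# (a toy random hill climb stands in for ordinary training).
w = rng.normal(size=3)
for _ in range(300):
    cand = w + 0.1 * rng.normal(size=3)
    if accuracy(cand) > accuracy(w):
        w = cand
w_pre = w.copy()  # parameters before fairness tuning

# Stage 2: fairness tuning. The fairness metric enters only through
# the reward, so swapping in another metric changes one function.
# `lam` and the perturbation search are illustrative stand-ins for
# the paper's actual RL procedure.
lam = 2.0

def reward(w):
    return accuracy(w) - lam * dp_gap(w)

for _ in range(300):
    cand = w + 0.05 * rng.normal(size=3)
    if reward(cand) > reward(w):
        w = cand
```

Because the tuning loop only ever accepts parameter perturbations that improve the reward, the combined accuracy-minus-gap score never drops below its value at the end of stage 1; the metric itself is treated as a black box, which is what lets this style of tuning generalize across fairness metrics and classifier families.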

Related research

04/29/2021
You Can Still Achieve Fairness Without Sensitive Attributes: Exploring Biases in Non-Sensitive Features
Though machine learning models are achieving great success, extensive s...

02/02/2023
Hyper-parameter Tuning for Fair Classification without Sensitive Attribute Access
Fair machine learning methods seek to train models that balance model pe...

05/02/2023
On the Impact of Data Quality on Image Classification Fairness
With the proliferation of algorithmic decision-making, increased scrutin...

01/27/2023
Variance, Self-Consistency, and Arbitrariness in Fair Classification
In fair classification, it is common to train a model, and to compare an...

12/13/2022
Model-Free Approach to Fair Solar PV Curtailment Using Reinforcement Learning
The rapid adoption of residential solar photovoltaics (PV) has resulted ...

02/16/2021
Evaluating Fairness of Machine Learning Models Under Uncertain and Incomplete Information
Training and evaluation of fair classifiers is a challenging problem. Th...

07/28/2019
Wasserstein Fair Classification
We propose an approach to fair classification that enforces independence...
