
Debiasing classifiers: is reality at variance with expectation?

by Ashrya Agrawal, et al.

Many methods for debiasing classifiers have been proposed, but their effectiveness in practice remains unclear. We evaluate the performance of pre-processing and post-processing debiasers for improving fairness in random forest classifiers trained on a suite of data sets. Specifically, we study how these debiasers generalize with respect to out-of-sample test error when computing fairness–performance and fairness–fairness trade-offs, and with respect to changes in other fairness metrics that were not explicitly optimised. Our results demonstrate that out-of-sample fairness and performance can vary substantially and unexpectedly, and that this estimation variance arises from class imbalances in both the outcome and the protected classes. Our results highlight the importance of evaluating out-of-sample performance in practical usage.
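The gap the abstract describes between in-sample and out-of-sample fairness can be illustrated with a minimal sketch. The dataset, metric, and model settings below are illustrative assumptions, not the paper's exact experimental setup: a synthetic dataset with an imbalanced outcome and a small protected group, a random forest, and the demographic-parity gap computed on the training and held-out splits.

```python
# Hedged sketch (not the paper's setup): in-sample vs out-of-sample fairness
# estimates for a random forest, using a hand-rolled demographic-parity gap.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 2000
# Protected attribute: minority group is ~10% of samples (imbalanced).
a = (rng.random(n) < 0.1).astype(int)
X = rng.normal(size=(n, 5)) + a[:, None] * 0.5
# Outcome: a minority of positives, correlated with the protected group.
y = (X[:, 0] + 0.5 * a + rng.normal(scale=2.0, size=n) > 2.0).astype(int)

X_tr, X_te, y_tr, y_te, a_tr, a_te = train_test_split(
    X, y, a, test_size=0.5, random_state=0, stratify=y)

clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_tr, y_tr)

def dp_gap(pred, group):
    """Demographic-parity gap: |P(yhat=1 | a=1) - P(yhat=1 | a=0)|."""
    return abs(pred[group == 1].mean() - pred[group == 0].mean())

gap_train = dp_gap(clf.predict(X_tr), a_tr)
gap_test = dp_gap(clf.predict(X_te), a_te)
print(f"in-sample DP gap:     {gap_train:.3f}")
print(f"out-of-sample DP gap: {gap_test:.3f}")
```

Because the forest nearly interpolates its training data, the in-sample gap reflects the training labels while the held-out gap reflects generalization; with few positives and a small protected group, the two estimates can diverge noticeably, which is the variance phenomenon the paper studies.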




Impact of Data Processing on Fairness in Supervised Learning

We study the impact of pre and post processing for reducing discriminati...

Unaware Fairness: Hierarchical Random Forest for Protected Classes

Procedural fairness has been a public concern, which leads to controvers...

fairlib: A Unified Framework for Assessing and Improving Classification Fairness

This paper presents fairlib, an open-source framework for assessing and ...

Beyond Adult and COMPAS: Fairness in Multi-Class Prediction

We consider the problem of producing fair probabilistic classifiers for ...

Your fairness may vary: Group fairness of pretrained language models in toxic text classification

We study the performance-fairness trade-off in more than a dozen fine-tu...

Genetic programming approaches to learning fair classifiers

Society has come to rely on algorithms like classifiers for important de...

Adaptive Fairness Improvement Based on Causality Analysis

Given a discriminating neural network, the problem of fairness improveme...