
Variance, Self-Consistency, and Arbitrariness in Fair Classification

by A. Feder Cooper et al.

In fair classification, it is common to train a model, and to compare and correct subgroup-specific error rates for disparities. However, even if a model's classification decisions satisfy a fairness metric, it is not necessarily the case that these decisions are equally confident. This becomes clear if we measure variance: We can fix everything in the learning process except the subset of training data, train multiple models, measure (dis)agreement in predictions for each test example, and interpret disagreement to mean that the learning process is more unstable with respect to its classification decision. Empirically, some decisions can in fact be so unstable that they are effectively arbitrary. To reduce this arbitrariness, we formalize a notion of self-consistency of a learning process, develop an ensembling algorithm that provably increases self-consistency, and empirically demonstrate its utility to often improve both fairness and accuracy. Further, our evaluation reveals a startling observation: Applying ensembling to common fair classification benchmarks can significantly reduce subgroup error rate disparities, without employing common pre-, in-, or post-processing fairness interventions. Taken together, our results indicate that variance, particularly on small datasets, can muddle the reliability of conclusions about fairness. One solution is to develop larger benchmark tasks. To this end, we release a toolkit that makes the Home Mortgage Disclosure Act datasets easily usable for future research.
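The variance measurement the abstract describes can be sketched in a few lines: hold everything in the learning process fixed except the subset of training data, train several models, and measure per-example (dis)agreement on the test set. The toy data, the `fit_threshold` learner, and the agreement formula below are illustrative assumptions, not the paper's actual benchmarks or algorithm; the majority-vote ensemble at the end is a generic variance-reduction step in the spirit of the approach, not its exact method.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data (assumed for illustration): one feature, noisy threshold label.
X = rng.normal(size=200)
y = (X + rng.normal(scale=0.8, size=200) > 0).astype(int)
X_test = np.linspace(-2, 2, 9)

def fit_threshold(X, y):
    """Pick the threshold t minimizing training error for the rule x > t."""
    candidates = np.sort(X)
    errs = [np.mean((X > t).astype(int) != y) for t in candidates]
    return candidates[int(np.argmin(errs))]

# Train B models, each on a different random subset of the training data,
# with everything else in the learning process held fixed.
B = 50
preds = np.empty((B, len(X_test)), dtype=int)
for b in range(B):
    idx = rng.choice(len(X), size=len(X) // 2, replace=False)
    t = fit_threshold(X[idx], y[idx])
    preds[b] = (X_test > t).astype(int)

# Per-example self-consistency: the chance that two independently trained
# models agree on this example. Values near 0.5 mean the decision is
# effectively arbitrary; values near 1.0 mean it is stable.
p_hat = preds.mean(axis=0)                    # fraction predicting class 1
self_consistency = p_hat**2 + (1 - p_hat)**2  # empirical agreement probability

# A simple variance-reducing step: ensemble the B models by majority vote.
ensemble_pred = (p_hat > 0.5).astype(int)

for x, sc in zip(X_test, self_consistency):
    print(f"x={x:+.2f}  self-consistency={sc:.2f}")
```

Test examples far from the decision boundary come out highly self-consistent, while examples near it show disagreement across runs, matching the abstract's observation that instability is concentrated on particular decisions.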

