The Unbearable Weight of Massive Privilege: Revisiting Bias-Variance Trade-Offs in the Context of Fair Prediction

02/17/2023
by   Falaah Arif Khan, et al.

In this paper we revisit the bias-variance decomposition of model error from the perspective of designing a fair classifier. We are motivated by the widely held socio-technical belief that noise variance in large datasets in social domains tracks demographic characteristics such as gender, race, and disability. We propose a conditional-iid (ciid) model, built from group-specific classifiers, that seeks to improve on the trade-offs made by a single model (the iid setting). We theoretically analyze the bias-variance decomposition of different models under a Gaussian mixture model, and then empirically test our setup on the COMPAS and folktables datasets. We instantiate the ciid model with two procedures that improve "fairness" by conditioning out undesirable effects: first, by conditioning directly on sensitive attributes, and second, by clustering samples into groups and conditioning on cluster membership (blind to protected group membership). Our analysis suggests that there are principled procedures and concrete real-world use cases under which conditional models are preferred, and our empirical results strongly indicate that non-iid settings, such as the ciid setting proposed here, may be more suitable for big data applications in social contexts.
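The two conditioning procedures described in the abstract can be sketched as follows. This is a minimal illustration, not the authors' implementation: it assumes scikit-learn, uses synthetic data in which noise variance differs by group (mimicking the paper's motivation), and fits one classifier per sensitive group and one per (group-blind) cluster.

```python
# Hedged sketch of the two "ciid" procedures: (1) condition on the sensitive
# attribute, (2) cluster samples blindly and condition on cluster membership.
# Data, model choices, and the routing helper are illustrative assumptions.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 600
X = rng.normal(size=(n, 4))
group = rng.integers(0, 2, size=n)  # binarized sensitive attribute (synthetic)
# Noise variance depends on group membership, as in the paper's motivation.
y = (X[:, 0] + 0.5 * group + rng.normal(scale=0.5 + group, size=n) > 0).astype(int)

# Baseline: a single model trained on the pooled data (iid setting).
iid_model = LogisticRegression().fit(X, y)

# Procedure 1: condition directly on the sensitive attribute.
per_group = {g: LogisticRegression().fit(X[group == g], y[group == g])
             for g in np.unique(group)}

# Procedure 2: cluster samples (blind to group), condition on cluster id.
clusters = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)
per_cluster = {c: LogisticRegression().fit(X[clusters == c], y[clusters == c])
               for c in np.unique(clusters)}

def ciid_predict(models, membership, X):
    """Route each sample to the classifier for its group/cluster."""
    pred = np.empty(len(X), dtype=int)
    for m, clf in models.items():
        mask = membership == m
        if mask.any():
            pred[mask] = clf.predict(X[mask])
    return pred

preds_group = ciid_predict(per_group, group, X)
preds_cluster = ciid_predict(per_cluster, clusters, X)
```

Comparing per-group error rates of `iid_model` against the two conditional predictors is then a direct way to probe the bias-variance trade-offs the paper studies.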


Related research

- 07/13/2022: Understanding Unfairness in Fraud Detection through Model and Data Bias Interactions. "In recent years, machine learning algorithms have become ubiquitous in a..."
- 06/23/2021: Fairness for Image Generation with Uncertain Sensitive Attributes. "This work tackles the issue of fairness in the context of generative pro..."
- 02/24/2022: Trade-offs between Group Fairness Metrics in Societal Resource Allocation. "We consider social resource allocations that deliver an array of scarce..."
- 02/09/2023: On Fairness and Stability: Is Estimator Variance a Friend or a Foe? "The error of an estimator can be decomposed into a (statistical) bias te..."
- 09/17/2019: A Distributed Fair Machine Learning Framework with Private Demographic Data Protection. "Fair machine learning has become a significant research topic with broad..."
- 06/04/2022: When Personalization Harms: Reconsidering the Use of Group Attributes in Prediction. "The standard approach to personalization in machine learning consists of..."
