Preventing Fairness Gerrymandering: Auditing and Learning for Subgroup Fairness

11/14/2017
by Michael Kearns, et al.

The most prevalent notions of fairness in machine learning are statistical definitions: they fix a small collection of pre-defined groups, and then ask for parity of some statistic of the classifier across these groups. Constraints of this form are susceptible to (intentional or inadvertent) "fairness gerrymandering", in which a classifier appears to be fair on each individual group, but badly violates the fairness constraint on one or more structured subgroups defined over the protected attributes. We propose instead to demand statistical notions of fairness across exponentially (or infinitely) many subgroups, defined by a structured class of functions over the protected attributes. This interpolates between statistical definitions of fairness and recently proposed individual notions of fairness, but it raises several computational challenges. It is no longer clear how to even audit a fixed classifier to see if it satisfies such a strong definition of fairness. We prove that the computational problem of auditing subgroup fairness for both equality of false positive rates and statistical parity is equivalent to the problem of weak agnostic learning --- which means it is computationally hard in the worst case, even for simple structured subclasses. However, it also suggests that common heuristics for learning can be applied to successfully solve the auditing problem in practice. We then derive an algorithm that provably converges to the best fair distribution over classifiers in a given class, given access to oracles which can solve the agnostic learning and auditing problems. The algorithm is based on a formulation of subgroup fairness as fictitious play in a two-player zero-sum game between a Learner and an Auditor. We implement our algorithm using linear regression as a heuristic oracle, and show that we can effectively both audit and learn fair classifiers on real datasets.
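The auditing step described above reduces to weak agnostic learning over the class of subgroup functions, which the abstract says is solved heuristically with linear regression. The sketch below is a minimal illustration, not the authors' reference implementation, of how such a regression-based audit for false-positive-rate subgroup fairness might look; the function name, interface, and thresholding choice are assumptions made for this example.

```python
# Minimal sketch (not the authors' code) of a regression-based audit:
# linear regression over the protected attributes plays the role of the
# heuristic weak-agnostic-learning oracle for false-positive-rate
# subgroup fairness. Names here (audit_fp_subgroup, etc.) are illustrative.
import numpy as np
from sklearn.linear_model import LinearRegression

def audit_fp_subgroup(protected, y_true, y_pred):
    """Return a candidate subgroup (as a fitted linear model) and the
    weighted false-positive-rate violation it certifies.

    protected : (n, d) array of protected attributes
    y_true    : (n,) binary true labels
    y_pred    : (n,) binary predictions of the classifier being audited
    """
    protected = np.asarray(protected, dtype=float)
    y_true = np.asarray(y_true)
    y_pred = np.asarray(y_pred)

    neg = y_true == 0                        # false positives live among true negatives
    fp_overall = y_pred[neg].mean()          # overall false positive rate

    # Regress each negative example's deviation from the base rate onto the
    # protected attributes; the region where the fit is positive is a
    # candidate subgroup with an elevated false positive rate.
    reg = LinearRegression().fit(protected[neg], y_pred[neg] - fp_overall)
    in_group = reg.predict(protected[neg]) >= 0.0

    if not in_group.any():
        return reg, 0.0
    fp_group = y_pred[neg][in_group].mean()
    weight = in_group.mean()                 # subgroup mass among negatives
    return reg, weight * abs(fp_group - fp_overall)
```

In the full algorithm sketched in the abstract, a check of this kind would be run repeatedly as the Auditor's move in the fictitious-play dynamics, with the Learner updating its distribution over classifiers in response to the subgroups the Auditor finds.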


