Individual Fairness in Pipelines

04/12/2020
by Cynthia Dwork et al.

It is well understood that a system built from individually fair components may not itself be individually fair. In this work, we investigate individual fairness under pipeline composition. Pipelines differ from ordinary sequential or repeated composition in that individuals may drop out at any stage, and classification in subsequent stages may depend on the remaining "cohort" of individuals. As an example, a company might hire a team for a new project and at a later point promote the highest performer on the team. Unlike other repeated classification settings, where the degree of unfairness degrades gracefully over multiple fair steps, the degree of unfairness in pipelines can be arbitrary, even in a pipeline with just two stages. Guided by a panoply of real-world examples, we provide a rigorous framework for evaluating different types of fairness guarantees for pipelines. We show that naïve auditing is unable to uncover systematic unfairness and that, in order to ensure fairness, some form of dependence must exist between the design of algorithms at different stages in the pipeline. Finally, we provide constructions that permit flexibility at later stages, meaning that there is no need to lock in the entire pipeline at the time that the early stage is constructed.
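The hiring-then-promotion example can be sketched as a toy two-stage pipeline. This is an illustrative sketch only, not the paper's formal construction: the `hire` and `promote` functions and the score data are hypothetical. It shows how two nearly identical individuals, treated almost identically at the first stage, can nonetheless receive very different end-to-end outcomes once the second stage depends on the surviving cohort.

```python
# Toy sketch (not from the paper): a two-stage pipeline where the
# second stage's outcome depends on the cohort that survives stage 1.

def hire(candidates, threshold=0.5):
    """Stage 1: candidates with similar scores receive similar
    hiring decisions, so this stage looks fair in isolation."""
    return [c for c in candidates if c["score"] >= threshold]

def promote(cohort):
    """Stage 2: promotes the single top performer of the surviving
    cohort; outcomes now depend on who else remains."""
    return max(cohort, key=lambda c: c["score"])

candidates = [
    {"name": "a", "score": 0.70},
    {"name": "b", "score": 0.71},  # nearly identical to "a"
    {"name": "c", "score": 0.40},
]

cohort = hire(candidates)           # "a" and "b" both survive stage 1
winner = promote(cohort)            # only "b" is promoted
print([c["name"] for c in cohort])  # ['a', 'b']
print(winner["name"])               # 'b'
```

Here "a" and "b" are as similar as two candidates can be, yet the pipeline's end-to-end treatment of them differs maximally (promotion versus nothing), echoing the abstract's point that unfairness in pipelines can be arbitrary even with just two stages.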
