Does mitigating ML's disparate impact require disparate treatment?

11/19/2017
by Zachary C. Lipton, et al.

Following related work in law and policy, two notions of prejudice have come to shape the study of fairness in algorithmic decision-making. Algorithms exhibit disparate treatment if they formally treat people differently according to a protected characteristic, like race, or if they intentionally discriminate (even if via proxy variables). Algorithms exhibit disparate impact if they affect subgroups differently. Disparate impact can arise unintentionally and absent disparate treatment. The natural way to reduce disparate impact would be to apply disparate treatment in favor of the disadvantaged group, i.e. to apply affirmative action. However, owing to the practice's contested legal status, several papers have proposed trying to eliminate both forms of unfairness simultaneously, introducing a family of algorithms that we denote disparate learning processes (DLPs). These processes incorporate the protected characteristic as an input to the learning algorithm (e.g. via a regularizer) but produce a model that cannot directly access the protected characteristic as an input. In this paper, we make the following arguments: (i) DLPs can be functionally equivalent to disparate treatment, and thus should carry the same legal status; (ii) when the protected characteristic is redundantly encoded in the nonsensitive features, DLPs can exactly apply any disparate treatment protocol; (iii) when the characteristic is only partially encoded, DLPs may induce within-class discrimination. Finally, we argue the normative point that rather than masking efforts towards proportional representation, it is preferable to undertake them transparently.
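To make the DLP idea concrete, here is a minimal sketch (not the paper's own implementation) of a learning process in the spirit the abstract describes: logistic regression trained with an added penalty on the gap between the two groups' mean predicted scores. The penalty term and the synthetic data are illustrative assumptions; the key property is that the group label enters only the training objective, while the fitted weights score individuals from nonsensitive features alone.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic data: two groups with shifted feature distributions,
# so a plain model exhibits disparate impact.
n = 400
group = rng.integers(0, 2, n)                         # protected characteristic (0/1)
X = rng.normal(loc=group[:, None] * 1.0, scale=1.0, size=(n, 2))
y = (X[:, 0] + X[:, 1] + rng.normal(0, 0.5, n) > 1.0).astype(float)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train(X, y, group=None, lam=0.0, lr=0.1, steps=2000):
    """Logistic regression via gradient descent. If `group` is given,
    add lam * (gap in mean predicted score between groups)^2 to the
    loss -- a DLP-style regularizer. The returned weights never see
    `group` at prediction time."""
    Xb = np.hstack([X, np.ones((len(X), 1))])          # append bias column
    w = np.zeros(Xb.shape[1])
    for _ in range(steps):
        p = sigmoid(Xb @ w)
        grad = Xb.T @ (p - y) / len(y)                 # logistic-loss gradient
        if group is not None and lam > 0:
            gap = p[group == 1].mean() - p[group == 0].mean()
            dp = p * (1 - p)                           # d(sigmoid)/d(logit)
            dgap = (Xb[group == 1] * dp[group == 1, None]).mean(0) \
                 - (Xb[group == 0] * dp[group == 0, None]).mean(0)
            grad = grad + lam * 2 * gap * dgap         # gradient of the penalty
        w -= lr * grad
    return w

def score_gap(w):
    """Absolute difference in the groups' mean predicted scores."""
    p = sigmoid(np.hstack([X, np.ones((len(X), 1))]) @ w)
    return abs(p[group == 1].mean() - p[group == 0].mean())

w_plain = train(X, y)                                  # ordinary fit
w_dlp = train(X, y, group=group, lam=5.0)              # DLP-style fit

print(f"gap (plain): {score_gap(w_plain):.3f}")
print(f"gap (DLP):   {score_gap(w_dlp):.3f}")
```

Because the group labels and the nonsensitive features are correlated here, the regularizer effectively pushes the model toward group-dependent scoring rules expressed through proxies, which is the paper's point (i): the disparate treatment is implicit in the learned weights rather than explicit in the inputs.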


