A continuous framework for fairness

12/21/2017
by Philipp Hacker et al.

Increasingly, discrimination by algorithms is perceived as a societal and legal problem. In response, a number of criteria for implementing algorithmic fairness in machine learning have been developed in the literature. This paper proposes the Continuous Fairness Algorithm (CFAθ), which enables a continuous interpolation between different fairness definitions. More specifically, we make three main contributions to the existing literature. First, our approach allows the decision maker to continuously vary between concepts of individual and group fairness. As a consequence, the algorithm enables the decision maker to adopt intermediate "worldviews" on the degree of discrimination encoded in algorithmic processes, adding nuance to the extreme cases of "we're all equal" (WAE) and "what you see is what you get" (WYSIWYG) proposed so far in the literature. Second, we use optimal transport theory, and specifically the concept of the barycenter, to maximize decision maker utility under the chosen fairness constraints. Third, the algorithm is able to handle cases of intersectionality, i.e., multi-dimensional discrimination against certain groups on the grounds of several criteria. We discuss three main examples (college admissions; credit applications; insurance contracts) and map out the policy implications of our approach. The explicit formalization of the trade-off between individual and group fairness allows this post-processing approach to be tailored to different situational contexts in which one or the other fairness criterion may take precedence.
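To make the interpolation idea concrete, here is a minimal sketch of the kind of post-processing the abstract describes, not the authors' implementation. It uses the standard fact that in one dimension the Wasserstein-2 barycenter of several distributions has a quantile function equal to the (weighted) average of the groups' quantile functions. A parameter θ ∈ [0, 1] then blends each individual's raw score (θ = 0, WYSIWYG) with its barycenter-mapped score (θ = 1, WAE); the function name and grid resolution are illustrative assumptions.

```python
import numpy as np

def continuous_fairness(scores, groups, theta):
    """Interpolate between raw scores (theta=0, 'what you see is what
    you get') and scores transported to the Wasserstein barycenter of
    the group score distributions (theta=1, 'we're all equal')."""
    scores = np.asarray(scores, dtype=float)
    groups = np.asarray(groups)
    out = np.empty_like(scores)

    # Quantile grid; in 1-D the barycenter's quantile function is the
    # group-size-weighted average of the groups' quantile functions.
    qs = np.linspace(0.0, 1.0, 101)
    group_ids = np.unique(groups)
    weights = np.array([(groups == g).mean() for g in group_ids])
    group_quantiles = np.array(
        [np.quantile(scores[groups == g], qs) for g in group_ids]
    )
    bary_quantiles = weights @ group_quantiles

    for g in group_ids:
        mask = groups == g
        # Empirical CDF value (rank) of each member within its own group.
        ranks = np.searchsorted(
            np.sort(scores[mask]), scores[mask], side="right"
        ) / mask.sum()
        # Push each member to the same quantile of the barycenter.
        bary_scores = np.interp(ranks, qs, bary_quantiles)
        out[mask] = (1.0 - theta) * scores[mask] + theta * bary_scores
    return out
```

Intermediate values of θ realize the paper's intermediate "worldviews": the larger θ is, the more the group score distributions are pulled toward a common barycenter, while within-group rankings are preserved.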


