iFair: Learning Individually Fair Data Representations for Algorithmic Decision Making

06/04/2018
by   Preethi Lahoti, et al.

In an increasing number of applications, people are rated and ranked for algorithmic decision making, typically based on machine learning. Research on incorporating fairness into such tasks has predominantly pursued the paradigm of group fairness: ensuring that each ethnic or social group receives its fair share of the outcomes of classifiers and rankings. In contrast, the alternative paradigm of individual fairness has received relatively little attention. This paper introduces a method for probabilistically clustering user records into a low-rank representation that captures individual fairness yet also achieves high accuracy in classification and regression models. Our notion of individual fairness requires that users who are similar in all task-relevant attributes, such as job qualification, while disregarding all potentially discriminating attributes, such as gender, should have similar outcomes. Since the case for fairness is ubiquitous across many tasks, we aim to learn general representations that can be applied to arbitrary downstream use-cases. We demonstrate the versatility of our method by applying it to classification and learning-to-rank tasks on two real-world datasets. Our experiments show substantial improvements over the best prior work for this setting.
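The individual-fairness notion described above can be illustrated with a small sketch (this is not the authors' implementation, and the prototype-based soft assignment below is only a simplified stand-in for the paper's probabilistic clustering; names such as `represent` and `prototypes` are illustrative): records are mapped to a low-rank representation using only task-relevant attributes, so that users who are similar on those attributes stay close regardless of the protected attribute.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy records: columns 0-1 are task-relevant (e.g. qualification scores),
# column 2 is a protected attribute (e.g. a binary gender indicator).
X = rng.normal(size=(100, 3))
X[:, 2] = rng.integers(0, 2, size=100)

# Only task-relevant attributes enter the similarity computation;
# the protected column is disregarded entirely.
relevant = X[:, :2]

# Low-rank representation via soft assignment to k prototypes
# (k is an illustrative choice, not a value from the paper).
k = 5
prototypes = relevant[rng.choice(len(relevant), k, replace=False)]

def represent(rows):
    # Softmax weights over negative squared distances to the prototypes,
    # then reconstruct each record as a mixture of prototypes.
    d2 = ((rows[:, None, :] - prototypes[None, :, :]) ** 2).sum(-1)
    w = np.exp(-d2)
    w /= w.sum(axis=1, keepdims=True)
    return w @ prototypes

Z = represent(relevant)

# Individual-fairness check for one pair: records close in the
# task-relevant attributes should remain close in the representation.
i, j = 0, 1
orig_gap = np.linalg.norm(relevant[i] - relevant[j])
repr_gap = np.linalg.norm(Z[i] - Z[j])
print(orig_gap, repr_gap)
```

In the full method, such a mapping would be learned by jointly minimizing a data-reconstruction loss and an individual-fairness loss; the sketch only shows the representation step and the pairwise distance check.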

Related research

- 07/03/2017, Fair Pipelines: This work facilitates ensuring fairness of machine learning in the real ...
- 09/03/2019, Avoiding Resentment Via Monotonic Fairness: Classifiers that achieve demographic balance by explicitly using protect...
- 10/12/2020, Bridging Machine Learning and Mechanism Design towards Algorithmic Fairness: Decision-making systems increasingly orchestrate our world: how to inter...
- 07/02/2018, A Unified Approach to Quantifying Algorithmic Unfairness: Measuring Individual & Group Unfairness via Inequality Indices: Discrimination via algorithmic decision making has received considerable...
- 08/30/2022, RAGUEL: Recourse-Aware Group Unfairness Elimination: While machine learning and ranking-based systems are in widespread use f...
- 03/15/2023, DualFair: Fair Representation Learning at Both Group and Individual Levels via Contrastive Self-supervision: Algorithmic fairness has become an important machine learning problem, e...
- 06/02/2023, The Flawed Foundations of Fair Machine Learning: The definition and implementation of fairness in automated decisions has...
