Beyond traditional assumptions in fair machine learning

01/29/2021
by Niki Kilbertus, et al.

This thesis scrutinizes common assumptions underlying traditional machine learning approaches to fairness in consequential decision making. After challenging the validity of these assumptions in real-world applications, we propose ways to move forward when they are violated. First, we show that group fairness criteria based purely on statistical properties of observed data are fundamentally limited. Revisiting this limitation from a causal viewpoint, we develop a more versatile conceptual framework, causal fairness criteria, and the first algorithms to achieve them. We also provide tools to analyze how sensitive a believed-to-be causally fair algorithm is to misspecifications of the causal graph. Second, we overcome the assumption that sensitive data are readily available in practice. To this end, we devise protocols based on secure multi-party computation to train, validate, and contest fair decision algorithms without requiring users to disclose their sensitive data or decision makers to disclose their models. Finally, we accommodate the fact that outcome labels are often observed only when a certain decision has been made. To relax the traditional assumption that labels can always be recorded, we suggest a paradigm shift away from training predictive models and towards directly learning decisions. The main contribution of this thesis is the development of theoretically substantiated and practically feasible methods that move research on fair machine learning closer to real-world applications.
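As a minimal illustration of what a group fairness criterion "based purely on statistical properties of observed data" can look like, the sketch below computes a demographic parity gap from decisions and group labels alone. The function name, data, and choice of criterion are illustrative assumptions for this page, not code from the thesis.

```python
import numpy as np

def demographic_parity_gap(y_pred, sensitive):
    """Absolute difference in positive-decision rates between two groups.

    y_pred    : array of binary decisions (0/1) produced by some decision rule
    sensitive : array of binary group membership (0/1), e.g. a protected attribute
    """
    y_pred = np.asarray(y_pred)
    sensitive = np.asarray(sensitive)
    rate_group_0 = y_pred[sensitive == 0].mean()  # acceptance rate in group 0
    rate_group_1 = y_pred[sensitive == 1].mean()  # acceptance rate in group 1
    return abs(rate_group_0 - rate_group_1)

# Illustrative (made-up) data: the two groups receive positive decisions at different rates.
decisions = [1, 1, 0, 1, 0, 0, 1, 0]
groups    = [0, 0, 0, 0, 1, 1, 1, 1]
print(demographic_parity_gap(decisions, groups))  # 0.5, i.e. far from parity
```

A check of this kind depends only on the observed joint distribution of decisions and group membership; it cannot, on its own, distinguish data-generating mechanisms that a causal analysis would treat differently, which is the kind of limitation the first contribution of the thesis addresses.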


