How Robust is your Fair Model? Exploring the Robustness of Diverse Fairness Strategies

07/11/2022
by Edward Small, et al.

With the introduction of machine learning into high-stakes decision making, ensuring algorithmic fairness has become an increasingly important problem. In response, many mathematical definitions of fairness have been proposed, and a variety of optimisation techniques have been developed, all designed to maximise a defined notion of fairness. However, fair solutions depend on the quality of the training data and can be highly sensitive to noise. Recent studies have shown that robustness (the ability of a model to perform well on unseen data) plays a significant role in the type of strategy that should be used when approaching a new problem and, hence, that measuring the robustness of these strategies has become a fundamental problem. In this work, we therefore propose a new criterion to measure the robustness of various fairness optimisation strategies - the robustness ratio. We conduct extensive experiments on five benchmark fairness data sets using three of the most popular fairness strategies with respect to four of the most popular definitions of fairness. Our experiments empirically show that fairness methods that rely on threshold optimisation are very sensitive to noise in all the evaluated data sets, despite mostly outperforming the other methods. This is in contrast to the other two methods, which are less fair in low-noise scenarios but fairer in high-noise ones. To the best of our knowledge, we are the first to quantitatively evaluate the robustness of fairness optimisation strategies. This can potentially serve as a guideline for choosing the most suitable fairness strategy for a given data set.
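The abstract does not spell out how the robustness ratio is computed; the full text gives the exact formulation. As a rough, hypothetical illustration of the idea of comparing a fairness measure on clean data against the same measure under noise, the sketch below uses demographic parity difference as the fairness measure, Gaussian feature noise as the perturbation, and assumed inputs (`model` with a `predict` method, feature matrix `X`, binary `sensitive` attribute). None of these choices are taken from the paper.

```python
import numpy as np

def demographic_parity_diff(y_pred, sensitive):
    """Absolute difference in positive-prediction rates between the two
    groups defined by a binary sensitive attribute."""
    rate_a = y_pred[sensitive == 0].mean()
    rate_b = y_pred[sensitive == 1].mean()
    return abs(rate_a - rate_b)

def robustness_ratio(model, X, sensitive, noise_scale=0.1, n_trials=20, seed=None):
    """Illustrative ratio: fairness violation on clean data divided by the
    average violation under Gaussian feature noise. Values near 1 suggest the
    strategy's fairness is stable under noise; values far from 1 suggest
    sensitivity. (Hypothetical formulation, not the paper's exact criterion.)"""
    rng = np.random.default_rng(seed)
    clean = demographic_parity_diff(model.predict(X), sensitive)
    noisy = []
    for _ in range(n_trials):
        X_noisy = X + rng.normal(scale=noise_scale, size=X.shape)
        noisy.append(demographic_parity_diff(model.predict(X_noisy), sensitive))
    return clean / (np.mean(noisy) + 1e-12)
```

Under this reading, one would report the ratio for each fairness strategy and noise level; a strategy whose ratio drifts sharply as noise grows would be flagged as non-robust.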

