Fairness-Aware Learning from Corrupted Data

02/11/2021
by Nikola Konstantinov, et al.

Addressing fairness concerns about machine learning models is a crucial step towards their long-term adoption in real-world automated systems. While many approaches have been developed for training fair models from data, little is known about the effects of data corruption on these methods. In this work we consider fairness-aware learning under arbitrary data manipulations. We show that an adversary can force any learner to return a biased classifier, with or without degrading accuracy, and that the strength of this bias increases for learning problems with underrepresented protected groups in the data. We also provide upper bounds that match these hardness results up to constant factors, by proving that two natural learning algorithms achieve order-optimal guarantees in terms of both accuracy and fairness under adversarial data manipulations.
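The claim that the achievable bias grows when a protected group is underrepresented can be illustrated with a toy simulation. This sketch is our own construction for intuition, not the paper's proof: an adversary with a fixed corruption budget flips positive labels, which is enough to change the empirically optimal constant prediction for a small group while leaving a large group's prediction untouched.

```python
def group_optimal_prediction(labels):
    # Bayes-optimal constant prediction for a group under 0/1 loss:
    # predict 1 iff at least half the observed labels are positive.
    return 1 if 2 * sum(labels) >= len(labels) else 0

def corrupt(labels, budget):
    # Adversary flips up to `budget` positive labels to negative.
    flipped = list(labels)
    remaining = budget
    for i, y in enumerate(flipped):
        if remaining == 0:
            break
        if y == 1:
            flipped[i] = 0
            remaining -= 1
    return flipped

# Training data: a majority group of 900 points and a minority
# (protected) group of 100 points, both 60% positive.
majority = [1] * 540 + [0] * 360
minority = [1] * 60 + [0] * 40

budget = 30  # the adversary may alter 3% of the 1000 training points

# The same budget flips the learned decision for the minority group
# (60 vs 40 positives becomes 30 vs 70) but not for the majority
# (540 vs 360 becomes 510 vs 390).
print(group_optimal_prediction(minority))                    # 1
print(group_optimal_prediction(corrupt(minority, budget)))   # 0
print(group_optimal_prediction(majority))                    # 1
print(group_optimal_prediction(corrupt(majority, budget)))   # 1
```

Because the adversary's budget is measured against the whole dataset, the fraction of a small group it can rewrite is much larger, which matches the abstract's observation that the induced bias strengthens as the protected group shrinks.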


Related research:

- Enforcing Delayed-Impact Fairness Guarantees (08/24/2022)
- On Adversarial Bias and the Robustness of Fair Machine Learning (06/15/2020)
- Breaking Fair Binary Classification with Optimal Flipping Attacks (04/12/2022)
- Fairness in Forecasting and Learning Linear Dynamical Systems (06/12/2020)
- Fairness in Forecasting of Observations of Linear Dynamical Systems (09/12/2022)
- Fairness-aware Regression Robust to Adversarial Attacks (11/04/2022)
- Equal Improvability: A New Fairness Notion Considering the Long-term Impact (10/13/2022)
