Fairness-Aware Learning from Corrupted Data

02/11/2021
by Nikola Konstantinov, et al.

Addressing fairness concerns about machine learning models is a crucial step towards their long-term adoption in real-world automated systems. While many approaches have been developed for training fair models from data, little is known about the effects of data corruption on these methods. In this work we consider fairness-aware learning under arbitrary data manipulations. We show that an adversary can force any learner to return a biased classifier, with or without degrading accuracy, and that the strength of this bias increases for learning problems with underrepresented protected groups in the data. We also provide upper bounds that match these hardness results up to constant factors, by proving that two natural learning algorithms achieve order-optimal guarantees in terms of both accuracy and fairness under adversarial data manipulations.
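To make the setting concrete, the following is a minimal illustrative sketch (not the paper's construction or bounds): an adversary flips a fraction of labels inside an underrepresented protected group before training, and the demographic-parity gap of the resulting classifier is measured on clean data. The synthetic data generator, the `flip_fraction` parameter, and the use of logistic regression are all assumptions made for illustration only.

```python
# Illustrative sketch of fairness degradation under label corruption targeting a
# minority protected group. Everything here (data generator, flip_fraction, model)
# is hypothetical and only meant to show the kind of setting studied in the paper.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def make_data(n=5000, minority_rate=0.1):
    """Synthetic data with a binary protected attribute a (a=1 is underrepresented)."""
    a = (rng.random(n) < minority_rate).astype(int)
    x = rng.normal(size=(n, 5)) + a[:, None] * 0.5
    y = (x[:, 0] + 0.5 * x[:, 1] + rng.normal(scale=0.5, size=n) > 0).astype(int)
    return x, y, a

def corrupt(y, a, flip_fraction=0.3):
    """Adversary flips labels of positive examples in the minority group."""
    y = y.copy()
    idx = np.flatnonzero((a == 1) & (y == 1))
    n_flip = int(flip_fraction * len(idx))
    y[rng.choice(idx, size=n_flip, replace=False)] = 0
    return y

def demographic_parity_gap(clf, x, a):
    """|P(h(x)=1 | a=0) - P(h(x)=1 | a=1)| on clean evaluation data."""
    pred = clf.predict(x)
    return abs(pred[a == 0].mean() - pred[a == 1].mean())

x_train, y_train, a_train = make_data()
x_test, y_test, a_test = make_data()

clean = LogisticRegression(max_iter=1000).fit(x_train, y_train)
poisoned = LogisticRegression(max_iter=1000).fit(x_train, corrupt(y_train, a_train))

print("gap with clean labels:         ", demographic_parity_gap(clean, x_test, a_test))
print("gap after minority-group flips:", demographic_parity_gap(poisoned, x_test, a_test))
```

Because the corrupted examples form only a small share of the training set, overall accuracy changes little, while the disparity between groups can grow noticeably; this mirrors the paper's observation that the achievable bias is larger when the protected group is underrepresented.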

