Fairness-Aware Learning from Corrupted Data

by Nikola Konstantinov, et al.

Addressing fairness concerns about machine learning models is a crucial step towards their long-term adoption in real-world automated systems. While many approaches have been developed for training fair models from data, little is known about the effects of data corruption on these methods. In this work, we consider fairness-aware learning under arbitrary data manipulations. We show that an adversary can force any learner to return a biased classifier, with or without degrading accuracy, and that the strength of this bias increases for learning problems in which the protected groups are underrepresented in the data. We also provide upper bounds that match these hardness results up to constant factors, by proving that two natural learning algorithms achieve order-optimal guarantees in terms of both accuracy and fairness under adversarial data manipulations.
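The core phenomenon the abstract describes, that corrupting only a small fraction of the data can induce a large bias when the protected group is underrepresented, can be illustrated with a toy simulation. The sketch below is not the paper's construction: the synthetic data, the unconstrained logistic-regression learner, the label-flipping adversary, and the demographic-parity gap metric are all illustrative assumptions. With a protected group making up 10% of the data, flipping the positive labels inside that group touches only about 5% of all training points, yet it drives the learned model to reject the minority group almost uniformly.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_data(n=4000, alpha=0.1):
    """Synthetic data: both groups share the same feature/label
    distribution; the protected group has proportion alpha."""
    a = (rng.random(n) < alpha).astype(float)          # group indicator
    x = rng.normal(size=(n, 2))
    y = (x[:, 0] + 0.5 * x[:, 1]
         + rng.normal(scale=0.5, size=n) > 0).astype(float)
    feats = np.column_stack([x, a, np.ones(n)])        # features + group + bias
    return feats, a, y

def train_logreg(feats, y, lr=0.5, steps=500):
    """Plain gradient-descent logistic regression, no fairness constraint."""
    w = np.zeros(feats.shape[1])
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-feats @ w))
        w -= lr * feats.T @ (p - y) / len(y)
    return w

def dp_gap(feats, a, w):
    """Demographic-parity gap: |P(pred=1 | a=1) - P(pred=1 | a=0)|."""
    pred = (feats @ w > 0).astype(float)
    return abs(pred[a == 1].mean() - pred[a == 0].mean())

feats, a, y = make_data()
w_clean = train_logreg(feats, y)
gap_clean = dp_gap(feats, a, w_clean)

# Adversary: flip positive labels inside the minority group only.
# Globally this corrupts only ~alpha/2 of the training points.
y_adv = y.copy()
y_adv[(a == 1) & (y == 1)] = 0.0
w_adv = train_logreg(feats, y_adv)
gap_adv = dp_gap(feats, a, w_adv)

print(f"clean gap: {gap_clean:.3f}, corrupted gap: {gap_adv:.3f}")
```

Shrinking `alpha` in this sketch makes the attack cheaper for the same induced gap, which mirrors the abstract's point that the achievable bias grows as the protected group becomes more underrepresented.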




