On Adversarial Bias and the Robustness of Fair Machine Learning

by Hongyan Chang, et al.

Optimizing prediction accuracy can come at the expense of fairness. To minimize discrimination against a group, fair machine learning algorithms strive to equalize the behavior of a model across different groups by imposing a fairness constraint on models. However, we show that giving the same importance to groups of different sizes and distributions, to counteract the effect of bias in training data, can conflict with robustness. We analyze data poisoning attacks against group-based fair machine learning, with a focus on equalized odds. An adversary who can control sampling or labeling for a fraction of the training data can reduce test accuracy significantly beyond what is achievable against unconstrained models. Adversarial sampling and adversarial labeling attacks can also worsen the model's fairness gap on test data, even though the model satisfies the fairness constraint on training data. We analyze the robustness of fair machine learning through an empirical evaluation of attacks on multiple algorithms and benchmark datasets.
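The equalized odds criterion referenced above requires that a model's true positive rate and false positive rate match across groups; the "fairness gap" is how far the model falls short of that. As a minimal sketch (assuming binary labels and two groups, with hypothetical helper and variable names), the gap can be measured as:

```python
import numpy as np

def equalized_odds_gap(y_true, y_pred, group):
    """Maximum disparity in true/false positive rates between two groups.

    Equalized odds asks that TPR and FPR be equal across groups;
    this returns the larger of the two absolute rate differences,
    so a perfectly fair model scores 0.
    """
    y_true, y_pred, group = map(np.asarray, (y_true, y_pred, group))
    gaps = []
    for y in (0, 1):  # condition on the true label: y=0 gives FPR, y=1 gives TPR
        rates = [y_pred[(y_true == y) & (group == g)].mean() for g in (0, 1)]
        gaps.append(abs(rates[0] - rates[1]))
    return max(gaps)

# Example: group 1 is over-predicted positive relative to group 0.
y_true = [1, 1, 0, 0, 1, 1, 0, 0]
group  = [0, 0, 0, 0, 1, 1, 1, 1]
y_pred = [1, 0, 0, 0, 1, 1, 1, 0]
print(equalized_odds_gap(y_true, y_pred, group))  # → 0.5
```

A poisoning adversary, as studied in the paper, manipulates a fraction of the training points (their sampling or their labels) so that a model trained to make this gap small on training data still ends up with low accuracy or a large gap at test time.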






