On Adversarial Bias and the Robustness of Fair Machine Learning

06/15/2020
by Hongyan Chang, et al.

Optimizing prediction accuracy can come at the expense of fairness. To minimize discrimination against a group, fair machine learning algorithms strive to equalize a model's behavior across different groups by imposing a fairness constraint during training. However, we show that giving the same importance to groups of different sizes and distributions, in order to counteract the effect of bias in the training data, can conflict with robustness. We analyze data poisoning attacks against group-based fair machine learning, focusing on equalized odds. An adversary who controls the sampling or labeling of a fraction of the training data can reduce test accuracy significantly beyond what is achievable against unconstrained models. Adversarial sampling and adversarial labeling attacks can also widen the model's fairness gap on test data, even when the model satisfies the fairness constraint on the training data. We analyze the robustness of fair machine learning through an empirical evaluation of these attacks on multiple algorithms and benchmark datasets.
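Equalized odds, the fairness notion studied here, requires that a classifier's true-positive and false-positive rates match across groups; the "fairness gap" is the largest disparity in these rates. A minimal sketch of how that gap can be measured (the helper names are illustrative, not from the paper):

```python
def rates(y_true, y_pred, group, g):
    """True-positive and false-positive rates restricted to members of group g."""
    tp = fp = pos = neg = 0
    for yt, yp, gi in zip(y_true, y_pred, group):
        if gi != g:
            continue
        if yt == 1:
            pos += 1
            tp += yp == 1
        else:
            neg += 1
            fp += yp == 1
    return tp / pos, fp / neg

def equalized_odds_gap(y_true, y_pred, group):
    """Largest disparity in TPR or FPR between groups 0 and 1."""
    tpr0, fpr0 = rates(y_true, y_pred, group, 0)
    tpr1, fpr1 = rates(y_true, y_pred, group, 1)
    return max(abs(tpr0 - tpr1), abs(fpr0 - fpr1))

# Toy example: the model catches every positive in group 0
# but misses half of the positives in group 1.
y_true = [1, 1, 0, 0, 1, 1, 0, 0]
y_pred = [1, 1, 0, 0, 1, 0, 0, 0]
group  = [0, 0, 0, 0, 1, 1, 1, 1]
print(equalized_odds_gap(y_true, y_pred, group))  # 0.5
```

A fair training algorithm drives this gap toward zero on the training data; the paper's point is that a poisoned training set can make the gap small in training while the gap (and the error) on test data remains large.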


Related research

Poisoning Attacks on Fair Machine Learning (10/17/2021)
Both fair machine learning and adversarial learning have been extensivel...

On the Privacy Risks of Algorithmic Fairness (11/07/2020)
Algorithmic fairness and privacy are essential elements of trustworthy m...

Subverting Fair Image Search with Generative Adversarial Perturbations (05/05/2022)
In this work we explore the intersection of fairness and robustness in the ...

Fairness-Aware Learning from Corrupted Data (02/11/2021)
Addressing fairness concerns about machine learning models is a crucial ...

Fairness constraint in Structural Econometrics and Application to fair estimation using Instrumental Variables (02/16/2022)
A supervised machine learning algorithm determines a model from a learni...

Fairness in Forecasting and Learning Linear Dynamical Systems (06/12/2020)
As machine learning becomes more pervasive, the urgency of assuring its ...

FaiR-N: Fair and Robust Neural Networks for Structured Data (10/13/2020)
Fairness in machine learning is crucial when individuals are subject to ...