On the Privacy Risks of Algorithmic Fairness

by Hongyan Chang et al.

Algorithmic fairness and privacy are essential elements of trustworthy machine learning for critical decision-making processes. Fair machine learning algorithms are designed to minimize discrimination against protected groups. This is achieved, for example, by imposing a constraint that equalizes the model's behavior across different groups, which can significantly increase the influence of some training points on the fair model. We study how this change in influence affects the model's information leakage about its training data. We analyze the privacy risks of statistical notions of fairness (e.g., equalized odds) through the lens of membership inference attacks: inferring whether a data point was used to train a model. We show that fairness comes at the cost of privacy, and that this cost is not distributed equally: the information leakage of fair models increases significantly on the unprivileged subgroups, which suffer discrimination in regular models. Furthermore, the more biased the underlying data, the higher the privacy cost of achieving fairness for the unprivileged subgroups. We demonstrate this effect on multiple datasets and explain how fairness-aware learning impacts privacy.
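To make the attack model concrete: the simplest form of membership inference exploits the fact that models typically fit their training points more closely than unseen points. The sketch below is a minimal loss-threshold attack in this spirit; it is an illustration of the general technique, not the authors' experimental setup, and the thresholds and toy losses are assumptions for the example.

```python
import numpy as np

def loss_threshold_attack(losses, threshold):
    """Predict 'member' (1) when a point's loss falls below the threshold.

    Intuition: a trained model tends to assign lower loss to points it
    was trained on, so low loss is (weak) evidence of membership.
    """
    return (np.asarray(losses) < threshold).astype(int)

# Toy losses: the first three points were "seen" in training,
# the last three were not (values are illustrative only).
member_losses = [0.05, 0.10, 0.08]
non_member_losses = [0.90, 0.75, 1.20]

preds = loss_threshold_attack(member_losses + non_member_losses,
                              threshold=0.5)
# preds -> [1, 1, 1, 0, 0, 0]: the attack separates the two groups here.
```

The paper's observation can be read through this lens: a fairness constraint that increases the influence of certain (often unprivileged-subgroup) training points widens the loss gap between members and non-members in those subgroups, making such threshold attacks more accurate there.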






