On the Privacy Risks of Algorithmic Fairness

11/07/2020
by   Hongyan Chang, et al.

Algorithmic fairness and privacy are both essential elements of trustworthy machine learning for critical decision-making processes. Fair machine learning algorithms aim to minimize discrimination against protected groups, for example by imposing a constraint that equalizes the model's behavior across different groups. Such constraints can significantly increase the influence of some training data points on the fair model. We study how this change in influence alters the model's information leakage about its training data. We analyze the privacy risks of statistical notions of fairness (i.e., equalized odds) through the lens of membership inference attacks: inferring whether a data point was used to train a model. We show that fairness comes at the cost of privacy, and that this cost is not distributed equally: the information leakage of fair models increases significantly on the unprivileged subgroups, which are the ones that suffer from discrimination in regular models. Furthermore, the more biased the underlying data, the higher the privacy cost of achieving fairness for the unprivileged subgroups. We demonstrate this effect on multiple datasets and explain how fairness-aware learning impacts privacy.
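To illustrate the membership inference setting the abstract refers to, here is a minimal sketch (not the paper's method) of a standard loss-threshold attack: a point is guessed to be a training member when the model's loss on it falls below a threshold, exploiting the fact that models typically fit training points better than unseen ones. All names and the toy loss values below are illustrative assumptions.

```python
# Hypothetical sketch of a loss-threshold membership inference attack.
# The attacker observes the model's per-example loss and guesses
# "member" when the loss is below a chosen threshold.
import numpy as np

def membership_inference(losses, threshold):
    """Predict membership (True = training member) from per-example losses."""
    return np.asarray(losses) < threshold

# Toy data: training members tend to incur lower loss than non-members.
member_losses = np.array([0.05, 0.10, 0.20])
nonmember_losses = np.array([0.80, 1.20, 0.60])

preds_members = membership_inference(member_losses, threshold=0.5)
preds_nonmembers = membership_inference(nonmember_losses, threshold=0.5)

# Attack accuracy over all six guesses: correct on members when True,
# correct on non-members when False.
accuracy = (preds_members.sum() + (~preds_nonmembers).sum()) / 6
```

In this toy example the loss gap is clean, so the attack is perfectly accurate; the paper's point is that fairness constraints can widen this gap, and hence the attack's success, specifically on unprivileged subgroups.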

Related research:

- On Adversarial Bias and the Robustness of Fair Machine Learning (06/15/2020): Optimizing prediction accuracy can come at the expense of fairness. Towa...

- Achieving Fairness at No Utility Cost via Data Reweighing (02/01/2022): With the fast development of algorithmic governance, fairness has become...

- Data Privacy and Trustworthy Machine Learning (09/14/2022): The privacy risks of machine learning models are a major concern when tra...

- Learning to Succeed while Teaching to Fail: Privacy in Closed Machine Learning Systems (05/23/2017): Security, privacy, and fairness have become critical in the era of data ...

- FairBatch: Batch Selection for Model Fairness (12/03/2020): Training a fair machine learning model is essential to prevent demograph...

- Fairness constraint in Structural Econometrics and Application to fair estimation using Instrumental Variables (02/16/2022): A supervised machine learning algorithm determines a model from a learni...

- DIVINE: Diverse Influential Training Points for Data Visualization and Model Refinement (07/13/2021): As the complexity of machine learning (ML) models increases, resulting i...
