Exploring the Unfairness of DP-SGD Across Settings

02/24/2022
by Frederik Noe, et al.

End users and regulators require private and fair artificial intelligence models, but previous work suggests these objectives may be at odds. We use the CivilComments dataset to evaluate the impact of applying the de facto standard approach to privacy, DP-SGD, across several fairness metrics. We evaluate three applications of DP-SGD: dimensionality reduction (PCA), linear classification (logistic regression), and robust deep learning (Group-DRO). We establish a negative, logarithmic correlation between privacy and fairness for linear classification and robust deep learning. For PCA, DP-SGD had no significant impact on fairness, but upon inspection it also did not appear to produce private representations.
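Since the abstract turns on DP-SGD's core mechanism, per-example gradient clipping followed by calibrated Gaussian noise, a minimal sketch may help. The snippet below is not the authors' code; it is an illustrative NumPy implementation of one DP-SGD step for binary logistic regression, with the clipping norm, noise multiplier, and learning rate chosen arbitrarily for illustration.

```python
import numpy as np

def dp_sgd_step(w, X, y, clip_norm=1.0, noise_multiplier=1.1, lr=0.1,
                rng=np.random.default_rng(0)):
    """One DP-SGD step for binary logistic regression (illustrative sketch).

    Each example's gradient is clipped to L2 norm <= clip_norm, the clipped
    gradients are summed and perturbed with Gaussian noise scaled by
    noise_multiplier * clip_norm, bounding any single example's influence.
    """
    n = len(y)
    preds = 1.0 / (1.0 + np.exp(-X @ w))           # sigmoid probabilities
    per_example_grads = (preds - y)[:, None] * X   # log-loss gradient per example

    # Clip each per-example gradient to L2 norm <= clip_norm.
    norms = np.linalg.norm(per_example_grads, axis=1, keepdims=True)
    clipped = per_example_grads / np.maximum(1.0, norms / clip_norm)

    # Sum, add calibrated Gaussian noise, then average and take a step.
    noise = rng.normal(0.0, noise_multiplier * clip_norm, size=w.shape)
    noisy_grad = (clipped.sum(axis=0) + noise) / n
    return w - lr * noisy_grad
```

In practice the privacy budget ε is derived from the noise multiplier, the batch sampling rate, and the number of steps via a privacy accountant; it is this ε that the abstract's privacy-fairness correlation is measured against.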

Related research

- DP-SGD vs PATE: Which Has Less Disparate Impact on Model Accuracy? (06/22/2021)
- Personalized DP-SGD using Sampling Mechanisms (05/24/2023)
- Automatic Clipping: Differentially Private Deep Learning Made Easier and Stronger (06/14/2022)
- Complex-valued deep learning with differential privacy (10/07/2021)
- Generalizing DP-SGD with Shuffling and Batching Clipping (12/12/2022)
- An Empirical Analysis of Fairness Notions under Differential Privacy (02/06/2023)
- DP-SGD vs PATE: Which Has Less Disparate Impact on GANs? (11/26/2021)
