An Empirical Analysis of Fairness Notions under Differential Privacy

Recent works have shown that selecting a model architecture suited to the differential privacy setting is necessary to achieve the best possible utility for a given privacy budget when training with differentially private stochastic gradient descent (DP-SGD) (Tramer and Boneh 2020; Cheng et al. 2022). In light of these findings, we empirically analyse how fairness notions belonging to distinct classes of statistical fairness criteria (independence, separation and sufficiency) are impacted when one selects a model architecture suitable for DP-SGD and optimized for utility. Using standard datasets from the ML fairness literature and a rigorous experimental protocol, we show that, when the optimal model architecture is selected for DP-SGD, the gaps across groups in the corresponding fairness metrics (demographic parity, equalized odds and predictive parity) more often decrease or are negligibly impacted, compared to a non-private baseline whose architecture has likewise been selected to maximize utility. These findings challenge the understanding that differential privacy necessarily exacerbates unfairness in deep learning models trained on biased datasets.
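As a concrete illustration of the three classes of criteria named above, the sketch below computes per-group gaps for demographic parity (independence), equalized odds (separation) and predictive parity (sufficiency) from binary predictions. This is a minimal sketch, not the paper's evaluation code; the function name fairness_gaps and the two-group encoding (group values 0 and 1) are illustrative assumptions.

```python
# Illustrative sketch (not the paper's code): group-fairness gaps for a
# binary classifier, covering the three statistical criteria discussed above.
import numpy as np

def fairness_gaps(y_true, y_pred, group):
    """Absolute gaps between two groups (group == 0 vs. group == 1).

    demographic parity : |P(yhat=1 | g=0) - P(yhat=1 | g=1)|            (independence)
    equalized odds     : max of TPR and FPR differences across groups    (separation)
    predictive parity  : |P(y=1 | yhat=1, g=0) - P(y=1 | yhat=1, g=1)|   (sufficiency)
    """
    y_true, y_pred, group = map(np.asarray, (y_true, y_pred, group))
    gaps = {}

    # Demographic parity: positive prediction rate per group.
    ppr = [y_pred[group == g].mean() for g in (0, 1)]
    gaps["demographic_parity"] = abs(ppr[0] - ppr[1])

    # Equalized odds: true-positive and false-positive rates per group.
    def rate(g, y):
        mask = (group == g) & (y_true == y)
        return y_pred[mask].mean() if mask.any() else np.nan
    tpr_gap = abs(rate(0, 1) - rate(1, 1))
    fpr_gap = abs(rate(0, 0) - rate(1, 0))
    gaps["equalized_odds"] = max(tpr_gap, fpr_gap)

    # Predictive parity: precision (positive predictive value) per group.
    def ppv(g):
        mask = (group == g) & (y_pred == 1)
        return y_true[mask].mean() if mask.any() else np.nan
    gaps["predictive_parity"] = abs(ppv(0) - ppv(1))

    return gaps
```

In an experiment of the kind described above, such gaps would be computed for both the DP-SGD-trained model and the non-private baseline and then compared across architectures.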
