Disparate Impact in Differential Privacy from Gradient Misalignment

06/15/2022
by   Maria S. Esipova, et al.

As machine learning becomes more widespread throughout society, aspects including data privacy and fairness must be carefully considered, and are crucial for deployment in highly regulated industries. Unfortunately, the application of privacy enhancing technologies can worsen unfair tendencies in models. In particular, one of the most widely used techniques for private model training, differentially private stochastic gradient descent (DPSGD), frequently intensifies disparate impact on groups within data. In this work we study the fine-grained causes of unfairness in DPSGD and identify gradient misalignment due to inequitable gradient clipping as the most significant source. This observation leads us to a new method for reducing unfairness by preventing gradient misalignment in DPSGD.
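The paper centers on the per-example gradient clipping step of DPSGD. Below is a minimal sketch of a single DPSGD update (clip each per-example gradient, sum, add Gaussian noise, average), assuming NumPy; the function name dpsgd_update and the default hyperparameters are illustrative and not taken from the paper's implementation. The comments point out where clipping can change the direction of the averaged gradient, the misalignment effect the abstract refers to.

```python
import numpy as np

def dpsgd_update(per_example_grads, clip_norm=1.0, noise_multiplier=1.0, lr=0.1):
    """One DPSGD step on a batch of per-example gradients.

    per_example_grads: array of shape (batch_size, num_params).
    Gradients whose L2 norm exceeds clip_norm are rescaled to have norm
    clip_norm. When some groups in the data systematically produce larger
    gradients, their contributions are shrunk more, so the clipped average
    can point in a different direction than the unclipped average
    (the gradient misalignment studied in the paper).
    """
    norms = np.linalg.norm(per_example_grads, axis=1, keepdims=True)
    scale = np.minimum(1.0, clip_norm / (norms + 1e-12))  # per-example clipping factor
    clipped = per_example_grads * scale
    summed = clipped.sum(axis=0)
    # Gaussian noise with standard deviation noise_multiplier * clip_norm
    noise = np.random.normal(0.0, noise_multiplier * clip_norm, size=summed.shape)
    noisy_mean = (summed + noise) / per_example_grads.shape[0]
    return -lr * noisy_mean  # parameter update to apply

# Example: a batch of 4 per-example gradients for a 3-parameter model;
# the second example has a large gradient and is clipped the most.
grads = np.array([[0.20, -0.10, 0.05],
                  [3.00,  2.00, -1.00],
                  [0.10,  0.00, 0.20],
                  [0.15,  0.05, 0.10]])
print(dpsgd_update(grads))
```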


Related research

03/08/2020  Removing Disparate Impact of Differentially Private Stochastic Gradient Descent on Model Accuracy
11/14/2022  SA-DPSGD: Differentially Private Stochastic Gradient Descent based on Simulated Annealing
02/05/2022  Differentially Private Graph Classification with GNNs
06/14/2023  Augment then Smooth: Reconciling Differential Privacy with Certified Robustness
05/24/2023  Flocks of Stochastic Parrots: Differentially Private Prompt Learning for Large Language Models
05/04/2023  Leveraging gradient-derived metrics for data selection and valuation in differentially private training
06/24/2019  The Value of Collaboration in Convex Machine Learning with Differential Privacy
