Differentially Private Stochastic Coordinate Descent

06/12/2020
by Georgios Damaskinos et al.

In this paper we tackle the challenge of making the stochastic coordinate descent algorithm differentially private. Unlike the classical gradient descent algorithm, where updates operate on a single model vector and adding controlled noise to this vector suffices to hide critical information about individuals, stochastic coordinate descent crucially relies on keeping auxiliary information in memory during training. This auxiliary information constitutes an additional privacy leak and poses the major challenge addressed in this work. Driven by the insight that, under independent noise addition, the consistency of the auxiliary information holds in expectation, we present DP-SCD, the first differentially private stochastic coordinate descent algorithm. We give a convergence analysis of our new method, analyze its privacy-utility trade-off, and demonstrate competitive performance against the popular stochastic gradient descent alternative while requiring significantly less tuning.
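To make the idea of noised coordinate updates with a consistently maintained auxiliary vector concrete, here is a minimal sketch of a differentially private coordinate-descent step for ridge regression. This is not the authors' DP-SCD algorithm; the clipping bound `clip`, noise scale `sigma`, regularizer `lam`, and the choice of the residual Xw - y as the auxiliary information are illustrative assumptions.

```python
# Illustrative sketch only (assumed setup, not the paper's DP-SCD):
# ridge regression 0.5/n * ||Xw - y||^2 + 0.5 * lam * ||w||^2, with the
# residual Xw - y kept in memory as the auxiliary information.
import numpy as np

def dp_coordinate_step(X, w, residual, j, clip, sigma, lam, rng):
    """One noised update of coordinate j; the auxiliary residual is moved
    by the SAME noisy delta, so it stays consistent with w in expectation."""
    n = X.shape[0]
    # Per-example contributions to the partial derivative w.r.t. w_j,
    # clipped so a single example has bounded influence.
    contrib = np.clip(X[:, j] * residual, -clip, clip)
    grad_j = contrib.sum() / n + lam * w[j]
    # Independent Gaussian noise calibrated to the clipped sensitivity (assumed scale).
    noisy_grad_j = grad_j + rng.normal(0.0, sigma * clip / n)
    # Coordinate step scaled by the curvature along coordinate j.
    delta = -noisy_grad_j / ((X[:, j] ** 2).mean() + lam)
    w[j] += delta
    residual += X[:, j] * delta  # keep auxiliary info consistent with the noisy update
    return w, residual

# Usage on toy data
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 5))
y = X @ rng.normal(size=5) + 0.1 * rng.normal(size=100)
w = np.zeros(5)
residual = X @ w - y
for _ in range(200):
    j = rng.integers(5)  # pick a random coordinate
    w, residual = dp_coordinate_step(X, w, residual, j, clip=1.0, sigma=0.5, lam=0.1, rng=rng)
```

The point of the sketch is the last two lines of the update: noise enters the coordinate step once, and the auxiliary residual is shifted by exactly that noisy step, so model and auxiliary state remain consistent in expectation rather than drifting apart under two independent noise injections.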


