
Differentially Private Accelerated Optimization Algorithms

by   Nurdan Kuru, et al.
Erasmus University Rotterdam
Rutgers University
Sabancı University

We present two classes of differentially private optimization algorithms derived from well-known accelerated first-order methods. The first algorithm is inspired by Polyak's heavy-ball method and employs a smoothing approach to decrease the noise accumulated over the gradient steps that is required for differential privacy. The second class of algorithms is based on Nesterov's accelerated gradient method and its recent multi-stage variant. We propose a noise-dividing mechanism for the iterations of Nesterov's method to improve the error behavior of the algorithm. Convergence rate analyses are provided for both the heavy-ball and Nesterov's accelerated gradient methods using dynamical systems analysis techniques. Finally, we conclude with numerical experiments showing that the presented algorithms have advantages over well-known differentially private algorithms.
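To make the setting concrete, the sketch below shows the generic gradient-perturbation template that differentially private first-order methods build on, here with heavy-ball momentum: Gaussian noise is added to each gradient evaluation before the accelerated update is applied. This is a minimal illustration, not the paper's algorithm; the function name, step sizes, and fixed noise scale `sigma` are assumptions, and the paper's actual contributions (the smoothing of accumulated noise and the calibration of `sigma` to the gradient sensitivity and the (ε, δ) privacy budget) are omitted.

```python
import numpy as np

def dp_heavy_ball(grad, x0, lr=0.1, momentum=0.9, sigma=0.1, n_iters=200, seed=0):
    """Sketch of a noisy heavy-ball iteration:

        x_{k+1} = x_k - lr * (grad(x_k) + w_k) + momentum * (x_k - x_{k-1}),

    where w_k is i.i.d. Gaussian noise added for privacy. In a real DP
    algorithm, sigma would be calibrated to the gradient sensitivity and
    the target (epsilon, delta) budget; here it is a fixed illustrative value.
    """
    rng = np.random.default_rng(seed)
    x_prev = x0.copy()
    x = x0.copy()
    for _ in range(n_iters):
        # Perturb the gradient with Gaussian noise (the DP mechanism).
        noisy_grad = grad(x) + sigma * rng.standard_normal(x.shape)
        # Heavy-ball update: gradient step plus momentum term.
        x_next = x - lr * noisy_grad + momentum * (x - x_prev)
        x_prev, x = x, x_next
    return x

# Example: minimize the quadratic f(x) = ||x||^2 / 2, whose gradient is x.
# The iterates converge to a noise ball around the minimizer at the origin.
x_final = dp_heavy_ball(lambda x: x, np.ones(3))
```

The accumulated noise is precisely what limits the accuracy of such methods: each perturbed gradient is propagated through subsequent momentum terms, which motivates mechanisms like the smoothing and noise-dividing schemes the paper proposes.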


