DIFF2: Differential Private Optimization via Gradient Differences for Nonconvex Distributed Learning

02/08/2023
by   Tomoya Murata, et al.

Differentially private optimization for nonconvex smooth objectives is considered. In previous work, the best known utility bound is O(√d/(nε_DP)) in terms of the squared full gradient norm, achieved for instance by Differentially Private Gradient Descent (DP-GD), where n is the sample size, d is the problem dimensionality, and ε_DP is the differential privacy parameter. To improve on this bound, we propose a new differentially private optimization framework called DIFF2 (DIFFerential private optimization via gradient DIFFerences), which constructs a differentially private global gradient estimator with potentially much smaller variance by privatizing communicated gradient differences rather than the gradients themselves. We show that DIFF2 with a gradient descent subroutine achieves a utility of O(d^(2/3)/(nε_DP)^(4/3)), which can be significantly better than the previous bound in its dependence on the sample size n. To the best of our knowledge, this is the first result to fundamentally improve the standard utility O(√d/(nε_DP)) for nonconvex objectives. Additionally, a more computation- and communication-efficient subroutine is combined with DIFF2, and its theoretical analysis is also given. Numerical experiments validate the superiority of the DIFF2 framework.
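To make the gradient-difference idea concrete, below is a minimal NumPy sketch of a DIFF2-style update loop. It is illustrative only and not the authors' exact algorithm: the function names, the clipping thresholds clip_grad and clip_diff, the noise scale sigma, and the single-initialization schedule are assumptions for demonstration, not values calibrated to a target (ε_DP, δ).

```python
# Illustrative sketch of a DIFF2-style update (not the authors' exact algorithm):
# the server maintains a running gradient estimator v built from privatized
# gradient *differences*, whose norm shrinks as consecutive iterates get close,
# so less noise is needed than for privatizing full gradients every round.
import numpy as np

rng = np.random.default_rng(0)

def clip(v, c):
    """Standard L2 norm clipping: scale v so that ||v||_2 <= c."""
    norm = np.linalg.norm(v)
    return v * min(1.0, c / (norm + 1e-12))

def diff2_gd(grad_fns, w0, eta=0.1, T=50, clip_grad=1.0, clip_diff=0.1, sigma=0.05):
    """grad_fns: list of per-client gradient oracles g_i(w).
    clip_grad, clip_diff, sigma are placeholder privacy parameters."""
    n = len(grad_fns)
    d = w0.shape[0]

    # Round 0: privatize full gradients once to initialize the estimator v.
    w_prev, w = w0.copy(), w0.copy()
    v = np.mean([clip(g(w), clip_grad) for g in grad_fns], axis=0)
    v = v + rng.normal(0.0, sigma * clip_grad, size=d) / n
    w_prev, w = w, w - eta * v

    for _ in range(1, T):
        # Later rounds: privatize only the differences g_i(w_t) - g_i(w_{t-1}),
        # which have small sensitivity when w_t is close to w_{t-1}.
        diffs = [clip(g(w) - g(w_prev), clip_diff) for g in grad_fns]
        v = v + np.mean(diffs, axis=0) + rng.normal(0.0, sigma * clip_diff, size=d) / n
        w_prev, w = w, w - eta * v
    return w

# Toy usage: quadratic losses 0.5*||w - a_i||^2 split across 4 "clients".
if __name__ == "__main__":
    targets = [np.array([1.0, -2.0]) + 0.1 * rng.normal(size=2) for _ in range(4)]
    grad_fns = [(lambda w, a=a: w - a) for a in targets]
    print("approximate minimizer:", diff2_gd(grad_fns, w0=np.zeros(2)))
```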

Related research

01/22/2022  Differentially Private SGDA for Minimax Problems
03/29/2017  Efficient Private ERM for Smooth Objectives
10/30/2019  Efficient Privacy-Preserving Nonconvex Optimization
06/25/2020  Stability Enhanced Privacy and Applications in Private Stochastic Gradient Descent
10/12/2022  Momentum Aggregation for Private Non-convex ERM
01/25/2022  Differentially Private Temporal Difference Learning with Stochastic Nonconvex-Strongly-Concave Optimization
05/30/2023  Clip21: Error Feedback for Gradient Clipping
