Differentially Private SGDA for Minimax Problems

01/22/2022
by   Zhenhuan Yang, et al.

Stochastic gradient descent ascent (SGDA) and its variants have been the workhorse for solving minimax problems. However, in contrast to the well-studied stochastic gradient descent (SGD) under differential privacy (DP) constraints, there is little work on understanding the generalization (utility) of SGDA with DP constraints. In this paper, we use the algorithmic stability approach to establish the generalization (utility) of DP-SGDA in different settings. In particular, for the convex-concave setting, we prove that DP-SGDA achieves an optimal utility rate in terms of the weak primal-dual population risk in both the smooth and non-smooth cases. To the best of our knowledge, this is the first such result for DP-SGDA in the non-smooth case. We further provide a utility analysis in the nonconvex-strongly-concave setting, which is the first known result in terms of the primal population risk. The convergence and generalization results for this nonconvex setting are new even in the non-private setting. Finally, numerical experiments demonstrate the effectiveness of DP-SGDA in both the convex and nonconvex cases.
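To make the setup concrete, below is a minimal sketch of a DP-SGDA-style update on a toy convex-concave problem: per-sample gradients are clipped and Gaussian noise is added to both the descent (primal) and ascent (dual) steps. This is an illustrative assumption-laden sketch, not the paper's exact algorithm; the objective, step size, clipping threshold, and noise multiplier are all chosen for demonstration only.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy dataset: the population saddle point of
#   f(w, v; z) = v * (w - z) - v**2 / 2
# (convex in w, strongly concave in v) is w* = mean(z), v* = 0.
z = rng.normal(loc=2.0, scale=1.0, size=200)

def dp_sgda(z, T=2000, lr=0.05, clip=1.0, sigma=0.5, batch=20, seed=1):
    """Illustrative DP-SGDA sketch: per-sample gradient clipping plus
    calibrated Gaussian noise on both the primal and dual updates.
    All hyperparameters here are hypothetical choices for this toy."""
    rng = np.random.default_rng(seed)
    w, v = 0.0, 0.0
    for _ in range(T):
        idx = rng.integers(0, len(z), size=batch)
        # Per-sample gradients of f at the current iterate (w, v)
        gw = np.full(batch, v)            # d f / d w = v
        gv = (w - z[idx]) - v             # d f / d v = (w - z) - v
        # Clip each per-sample gradient to magnitude <= clip
        gw = np.clip(gw, -clip, clip)
        gv = np.clip(gv, -clip, clip)
        # Average over the minibatch and add Gaussian noise
        noise_w = rng.normal(0.0, sigma * clip / batch)
        noise_v = rng.normal(0.0, sigma * clip / batch)
        w -= lr * (gw.mean() + noise_w)   # descent on the primal variable
        v += lr * (gv.mean() + noise_v)   # ascent on the dual variable
    return w, v

w, v = dp_sgda(z)
```

Despite the injected noise, the iterates settle near the empirical saddle point (w close to the sample mean of z, v close to 0), which is the kind of utility behavior the paper's bounds quantify.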


Related research:

09/09/2022 · Differentially Private Stochastic Gradient Descent with Low-Noise
In this paper, by introducing a low-noise condition, we study privacy an...

01/25/2022 · Differentially Private Temporal Difference Learning with Stochastic Nonconvex-Strongly-Concave Optimization
Temporal difference (TD) learning is a widely used method to evaluate po...

02/08/2023 · DIFF2: Differential Private Optimization via Gradient Differences for Nonconvex Distributed Learning
Differential private optimization for nonconvex smooth objective is cons...

02/16/2022 · Data-Driven Minimax Optimization with Expectation Constraints
Attention to data-driven optimization approaches, including the well-kno...

02/24/2023 · Differentially Private Algorithms for the Stochastic Saddle Point Problem with Optimal Rates for the Strong Gap
We show that convex-concave Lipschitz stochastic saddle point problems (...

04/07/2021 · Optimal Algorithms for Differentially Private Stochastic Monotone Variational Inequalities and Saddle-Point Problems
In this work, we conduct the first systematic study of stochastic variat...

04/11/2022 · Stability and Generalization of Differentially Private Minimax Problems
In the field of machine learning, many problems can be formulated as the...
