Differentially Private Algorithms for the Stochastic Saddle Point Problem with Optimal Rates for the Strong Gap

02/24/2023
by Raef Bassily, et al.

We show that convex-concave Lipschitz stochastic saddle point problems (also known as stochastic minimax optimization) can be solved under the constraint of (ϵ,δ)-differential privacy with a strong (primal-dual) gap rate of Õ(1/√n + √d/(nϵ)), where n is the dataset size and d is the dimension of the problem. This rate is nearly optimal, given existing lower bounds in differentially private stochastic optimization. Specifically, we prove a tight upper bound on the strong gap via a novel implementation and analysis of the recursive regularization technique, repurposed for saddle point problems. We show that this rate can be attained with O(min{n^2ϵ^{1.5}/√d, n^{3/2}}) gradient complexity, and with O(n) gradient complexity if the loss function is smooth. As a byproduct of our method, we develop a general algorithm that, given black-box access to a subroutine satisfying a certain α primal-dual accuracy guarantee with respect to the empirical objective, yields a solution to the stochastic saddle point problem with a strong gap of Õ(α + 1/√n). We show that this α-accuracy condition is satisfied by standard algorithms for the empirical saddle point problem, such as the proximal point method and the stochastic gradient descent ascent (SGDA) algorithm. Further, we show that even for simple problems an algorithm can have zero weak gap yet suffer an Ω(1) strong gap. We also show that there is a fundamental tradeoff between stability and accuracy: any Δ-stable algorithm has empirical gap Ω(1/(Δn)), and this bound is tight. This result also holds for empirical risk minimization problems and may be of independent interest.
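To make the setting more concrete, below is a minimal Python/NumPy sketch of a generic noisy stochastic gradient descent ascent loop for a constrained convex-concave problem: per-sample gradients are clipped and perturbed with Gaussian noise before the descent step in x and the ascent step in y. The toy bilinear objective, the noise multiplier sigma, and all hyperparameters are illustrative assumptions; the calibration of sigma to (ϵ,δ) and the recursive regularization scheme from the paper are omitted.

```python
import numpy as np

def project_ball(v, radius=1.0):
    """Project v onto the Euclidean ball of the given radius."""
    norm = np.linalg.norm(v)
    return v if norm <= radius else v * (radius / norm)

def noisy_sgda(grad_x, grad_y, data, dim_x, dim_y, steps=1000, lr=0.1,
               clip=1.0, sigma=1.0, rng=None):
    """Noisy SGDA sketch: per-sample gradients are clipped to norm `clip`
    and perturbed with Gaussian noise of std `sigma * clip`.
    (Calibrating sigma to a target (eps, delta) is omitted here.)"""
    rng = np.random.default_rng() if rng is None else rng
    x, y = np.zeros(dim_x), np.zeros(dim_y)
    for _ in range(steps):
        z = data[rng.integers(len(data))]          # sample one record
        gx, gy = grad_x(x, y, z), grad_y(x, y, z)  # per-sample gradients
        gx = project_ball(gx, clip)                # clip to bound sensitivity
        gy = project_ball(gy, clip)
        gx = gx + sigma * clip * rng.standard_normal(dim_x)
        gy = gy + sigma * clip * rng.standard_normal(dim_y)
        x = project_ball(x - lr * gx)              # descent step in x
        y = project_ball(y + lr * gy)              # ascent step in y
    return x, y

# Toy bilinear objective f(x, y; z) = x·z + x·y − y·z, convex in x and concave in y.
if __name__ == "__main__":
    data = np.random.default_rng(0).normal(size=(500, 2))
    gx = lambda x, y, z: z + y   # ∇_x f
    gy = lambda x, y, z: x - z   # ∇_y f
    x_hat, y_hat = noisy_sgda(gx, gy, data, dim_x=2, dim_y=2)
    print("approximate saddle point:", x_hat, y_hat)
```

In the abstract's terminology, a (non-private) SGDA routine of this kind is one of the standard empirical solvers shown to satisfy the α-accuracy condition used by the paper's black-box reduction.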

