
Output Perturbation for Differentially Private Convex Optimization with Improved Population Loss Bounds, Runtimes and Applications to Private Adversarial Training

02/09/2021
by Andrew Lowy, et al.

Finding efficient, easily implementable differentially private (DP) algorithms that offer strong excess risk bounds is an important problem in modern machine learning. To date, most work has focused on private empirical risk minimization (ERM) or private population loss minimization. However, besides average performance, there are often other objectives, such as fairness, adversarial robustness, or sensitivity to outliers, that are not captured by the classical ERM setup. To this end, we study a completely general family of convex, Lipschitz loss functions and establish the first known DP excess risk and runtime bounds for optimizing this broad class. We provide similar bounds under additional assumptions of smoothness and/or strong convexity. We also address private stochastic convex optimization (SCO). While (ϵ, δ)-DP (δ > 0) has been the focus of much recent work in private SCO, proving tight population loss bounds and runtime bounds for (ϵ, 0)-DP remains a challenging open problem. We provide the tightest known (ϵ, 0)-DP population loss bounds and fastest runtimes, with or without smoothness and strong convexity. Our methods extend to the δ > 0 setting, where we offer the unique benefit of ensuring differential privacy for arbitrary ϵ > 0 by incorporating a new form of Gaussian noise. Finally, we apply our theory to two learning frameworks: tilted ERM and adversarial learning. In particular, our theory quantifies tradeoffs between adversarial robustness, privacy, and runtime. Our results are achieved using perhaps the simplest DP algorithm: output perturbation. Although this method is not conceptually novel, our novel implementation scheme and analysis show that its power to achieve strong privacy, utility, and runtime guarantees has not been fully appreciated in prior work.
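To make the output-perturbation idea concrete, the following is a minimal illustrative sketch (not the paper's algorithm or its noise calibration): train a regularized ERM model non-privately, then release the minimizer plus noise scaled to its L2 sensitivity. It assumes a λ-strongly-convex objective with L-Lipschitz per-example losses, for which the minimizer's replace-one-record sensitivity is bounded by 2L/(nλ), and uses the standard Gamma-norm noise for pure (ϵ, 0)-DP in L2 geometry. All function and parameter names here are hypothetical.

```python
import numpy as np

def private_erm_output_perturbation(X, y, lam=0.1, eps=1.0, L=1.0,
                                    n_steps=500, lr=0.1, seed=0):
    """Sketch of (eps, 0)-DP regularized logistic regression via output
    perturbation. Assumes each row of X has norm <= 1, so the logistic
    loss is L=1 Lipschitz and the minimizer's L2 sensitivity under
    one-record replacement is at most 2*L / (n * lam)."""
    n, d = X.shape
    rng = np.random.default_rng(seed)

    # Non-private ERM: plain gradient descent on the regularized logistic loss.
    w = np.zeros(d)
    for _ in range(n_steps):
        margins = y * (X @ w)
        # d/dw log(1 + exp(-y * x.w)) = -y * x / (1 + exp(y * x.w))
        grad = -(y[:, None] * X / (1.0 + np.exp(margins))[:, None]).mean(axis=0)
        grad += lam * w
        w -= lr * grad

    # Calibrate noise to the L2 sensitivity of the minimizer.
    sensitivity = 2.0 * L / (n * lam)

    # Pure eps-DP in L2 geometry: noise vector with Gamma(d, sensitivity/eps)
    # distributed norm and uniformly random direction.
    noise_norm = rng.gamma(shape=d, scale=sensitivity / eps)
    direction = rng.normal(size=d)
    direction /= np.linalg.norm(direction)
    return w + noise_norm * direction
```

For the paper's (ϵ, δ)-DP variant one would instead add Gaussian noise; the abstract's point is that this release step is cheap (one noise draw) regardless of how the ERM solver is implemented.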

