Differentially Private Stochastic Gradient Descent with Low-Noise

09/09/2022
by   Puyu Wang, et al.

In this paper, by introducing a low-noise condition, we study the privacy and utility (generalization) performance of differentially private stochastic gradient descent (SGD) algorithms in the setting of stochastic convex optimization (SCO), for both pointwise and pairwise learning problems. For pointwise learning, we establish sharper excess risk bounds of order 𝒪(√(d log(1/δ))/(nϵ)) and 𝒪(n^{-(1+α)/2} + √(d log(1/δ))/(nϵ)) for the (ϵ,δ)-differentially private SGD algorithm with strongly smooth and α-Hölder smooth losses, respectively, where n is the sample size and d is the dimensionality. For pairwise learning, inspired by <cit.>, we propose a simple private SGD algorithm based on gradient perturbation that satisfies (ϵ,δ)-differential privacy, and we develop novel utility bounds for it. In particular, we prove that the algorithm achieves an excess risk rate of 𝒪(1/√(n) + √(d log(1/δ))/(nϵ)) with gradient complexity 𝒪(n) for strongly smooth losses and 𝒪(n^{(2-α)/(1+α)} + n) for α-Hölder smooth losses. Further, faster learning rates are established in the low-noise setting for both smooth and non-smooth losses. To the best of our knowledge, this is the first utility analysis that provides excess population risk bounds better than 𝒪(1/√(n) + √(d log(1/δ))/(nϵ)) for privacy-preserving pairwise learning.
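To make the gradient-perturbation mechanism concrete, here is a minimal Python sketch of (ϵ,δ)-DP-SGD for a pointwise squared loss. This is not the authors' exact algorithm: the clipping threshold C, step size, iteration count T = n, and the noise calibration σ = C√(2T log(1/δ))/(nϵ) are standard Gaussian-mechanism-style choices assumed here purely for illustration.

```python
import numpy as np

def dp_sgd(X, y, epsilon, delta, T=None, C=1.0, lr=0.1, seed=0):
    """Illustrative DP-SGD: per-example gradient clipping + Gaussian noise.

    Not the paper's exact method; constants are standard assumptions.
    Returns the averaged iterate, as is common in SCO analyses.
    """
    rng = np.random.default_rng(seed)
    n, d = X.shape
    if T is None:
        T = n  # gradient complexity O(n), matching the smooth-loss rate above
    # Noise scale for (epsilon, delta)-DP over T steps; a simple
    # composition-style calibration assumed for illustration.
    sigma = C * np.sqrt(2 * T * np.log(1.0 / delta)) / (n * epsilon)
    w = np.zeros(d)
    w_avg = np.zeros(d)
    for _ in range(T):
        i = rng.integers(n)                      # sample one example
        grad = (X[i] @ w - y[i]) * X[i]          # gradient of squared loss
        grad /= max(1.0, np.linalg.norm(grad) / C)  # clip norm to at most C
        grad += rng.normal(0.0, sigma, size=d)   # Gaussian perturbation
        w -= lr * grad
        w_avg += w / T
    return w_avg

# Usage on synthetic data (hypothetical example):
rng = np.random.default_rng(1)
X = rng.normal(size=(2000, 5))
y = X @ np.ones(5)
w_priv = dp_sgd(X, y, epsilon=1.0, delta=1e-5)
print(w_priv)
```

The pairwise variant analyzed in the paper differs in that each stochastic gradient depends on a pair of examples, which changes both the sensitivity analysis and the stated gradient complexity; the sketch above only conveys the gradient-perturbation idea.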


