Improved Rates for Differentially Private Stochastic Convex Optimization with Heavy-Tailed Data

06/02/2021
by   Gautam Kamath, et al.

We study stochastic convex optimization with heavy-tailed data under the constraint of differential privacy. Most prior work on this problem is restricted to the case where the loss function is Lipschitz. Instead, as introduced by Wang, Xiao, Devadas, and Xu, we study general convex loss functions with the assumption that the distribution of gradients has bounded k-th moments. We provide improved upper bounds on the excess population risk under approximate differential privacy of Õ(√(d/n) + (d/(ϵn))^((k-1)/k)) and Õ(d/n + (d/(ϵn))^((2k-2)/k)) for convex and strongly convex loss functions, respectively. We also prove nearly-matching lower bounds under the constraint of pure differential privacy, giving strong evidence that our bounds are tight.
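A standard tool in this line of work is to control heavy-tailed gradients by clipping them before adding privacy noise. As a loose illustration only (not the paper's actual algorithm, and with all names and parameters chosen here for the sketch), the following Python snippet shows a minimal clipped, noise-injected SGD loop of the kind such analyses build on:

```python
import numpy as np

def dp_clipped_sgd(grad_fn, w0, n_steps, lr, clip, noise_mult, rng):
    """Illustrative clipped noisy SGD sketch (hypothetical helper).

    grad_fn    : returns a (possibly heavy-tailed) stochastic gradient at w
    clip       : per-step gradient norm bound; caps the sensitivity of each update
    noise_mult : Gaussian noise scale relative to the clipping threshold
    """
    w = np.asarray(w0, dtype=float)
    for _ in range(n_steps):
        g = grad_fn(w)
        norm = np.linalg.norm(g)
        if norm > clip:
            # Rescale so the update's sensitivity is bounded by `clip`,
            # even when the raw gradient has heavy tails.
            g = g * (clip / norm)
        # Gaussian noise calibrated to the clipping threshold.
        noise = rng.normal(0.0, noise_mult * clip, size=w.shape)
        w = w - lr * (g + noise)
    return w
```

Clipping introduces bias when gradients are heavy-tailed, and the bounded k-th moment assumption is what lets the analysis trade that bias against the injected noise; the choice of clipping threshold drives the (d/(ϵn))^((k-1)/k)-type terms in bounds of this form.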


