Differentially Private Online-to-Batch for Smooth Losses

10/12/2022
by Qinzi Zhang, et al.

We develop a new reduction that converts any online convex optimization algorithm suffering O(√T) regret into an ϵ-differentially private stochastic convex optimization algorithm with the optimal convergence rate Õ(1/√T + √d/(ϵT)) on smooth losses in linear time, forming a direct analogy to the classical non-private "online-to-batch" conversion. By applying our techniques to more advanced adaptive online algorithms, we produce adaptive differentially private counterparts whose convergence rates depend on a priori unknown variances or parameter norms.
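To illustrate the general template the abstract refers to, the following is a minimal sketch of a noisy online-to-batch conversion: an online learner (here, plain online gradient descent with a 1/√t step size, which attains O(√T) regret) is fed stochastic gradients perturbed with Gaussian noise, and the average iterate is returned. The function name `noisy_online_to_batch`, the step-size schedule, and the noise scale are illustrative assumptions; this is not the paper's actual algorithm or its privacy accounting, only the shape of the reduction.

```python
import numpy as np

def noisy_online_to_batch(grad_oracle, dim, T, lr=0.1, noise_scale=1.0, rng=None):
    """Run T rounds of online gradient descent on noise-perturbed
    stochastic gradients and return the average iterate (illustrative sketch)."""
    rng = np.random.default_rng() if rng is None else rng
    w = np.zeros(dim)      # current iterate of the online learner
    avg = np.zeros(dim)    # running average of iterates (the "batch" output)
    for t in range(1, T + 1):
        g = grad_oracle(w)                                    # stochastic gradient at w
        g_noisy = g + noise_scale * rng.standard_normal(dim)  # privacy noise (illustrative calibration)
        w = w - (lr / np.sqrt(t)) * g_noisy                   # OGD update, O(sqrt(T))-regret step size
        avg += (w - avg) / t                                  # incremental average of iterates
    return avg

# Toy usage: smooth quadratic objective f(w) = 0.5 * ||w||^2,
# whose stochastic gradient is w plus sampling noise.
if __name__ == "__main__":
    rng = np.random.default_rng(0)
    oracle = lambda w: w + 0.1 * rng.standard_normal(w.shape)
    w_hat = noisy_online_to_batch(oracle, dim=5, T=10_000, rng=rng)
    print("averaged iterate:", w_hat)
```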


