Shuffle Private Stochastic Convex Optimization

06/17/2021
by Albert Cheu et al.

In shuffle privacy, each user sends a collection of randomized messages to a trusted shuffler, the shuffler randomly permutes these messages, and the resulting shuffled collection of messages must satisfy differential privacy. Prior work in this model has largely focused on protocols that use a single round of communication to compute algorithmic primitives like means, histograms, and counts. In this work, we present interactive shuffle protocols for stochastic convex optimization. Our optimization protocols rely on a new noninteractive protocol for summing vectors of bounded ℓ_2 norm. By combining this sum subroutine with techniques including mini-batch stochastic gradient descent, accelerated gradient descent, and Nesterov's smoothing method, we obtain loss guarantees for a variety of convex loss functions that significantly improve on those of the local model and sometimes match those of the central model.
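The shuffle pipeline described above can be sketched in code. The following is a minimal illustrative sketch, not the paper's actual protocol: the function names (`local_randomizer`, `shuffler`, `analyzer`) and the use of per-user Gaussian noise are assumptions made for exposition; the paper's summation protocol for bounded-ℓ_2 vectors is more involved, since its privacy guarantee comes from the shuffling itself rather than from each user's noise alone.

```python
import random
import numpy as np

def local_randomizer(x, clip_norm=1.0, noise_scale=0.1, rng=None):
    # Clip the user's vector to bounded l2 norm, then add Gaussian noise.
    # (Hypothetical simplification: the real protocol encodes each vector
    # as a collection of messages whose anonymity the shuffler provides.)
    rng = rng or np.random.default_rng()
    norm = np.linalg.norm(x)
    if norm > clip_norm:
        x = x * (clip_norm / norm)
    return x + rng.normal(0.0, noise_scale, size=x.shape)

def shuffler(messages):
    # Uniformly permute the collection of messages, hiding which user
    # contributed which message.
    messages = list(messages)
    random.shuffle(messages)
    return messages

def analyzer(messages):
    # Sum the shuffled messages to estimate the true vector sum.
    return np.sum(messages, axis=0)

# Usage: each user randomizes locally, the shuffler permutes, the
# analyzer aggregates; only the shuffled multiset is ever observed.
users = [np.array([0.6, 0.8]), np.array([1.0, 0.0]), np.array([0.3, 0.4])]
estimate = analyzer(shuffler([local_randomizer(x) for x in users]))
```

A sum subroutine of this shape is exactly what the optimization protocols consume: each round of mini-batch SGD needs a private estimate of the summed (clipped) gradients, which the shuffle sum provides.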


Related research

- Improved Rates for Differentially Private Stochastic Convex Optimization with Heavy-Tailed Data (06/02/2021)
- Differential Privacy Dynamics of Langevin Diffusion and Noisy Gradient Descent (02/11/2021)
- Jensen: An Easily-Extensible C++ Toolkit for Production-Level Machine Learning and Convex Optimization (07/17/2018)
- Privacy Amplification by Iteration (08/20/2018)
- ReSQueing Parallel and Private Stochastic Convex Optimization (01/01/2023)
- Privacy of Noisy Stochastic Gradient Descent: More Iterations without More Privacy Loss (05/27/2022)
- Private Stochastic Convex Optimization: Optimal Rates in ℓ_1 Geometry (03/02/2021)
