Shuffle Private Stochastic Convex Optimization

06/17/2021
by Albert Cheu, et al.

In shuffle privacy, each user sends a collection of randomized messages to a trusted shuffler, the shuffler randomly permutes these messages, and the resulting shuffled collection of messages must satisfy differential privacy. Prior work in this model has largely focused on protocols that use a single round of communication to compute algorithmic primitives like means, histograms, and counts. In this work, we present interactive shuffle protocols for stochastic convex optimization. Our optimization protocols rely on a new noninteractive protocol for summing vectors of bounded ℓ_2 norm. By combining this sum subroutine with techniques including mini-batch stochastic gradient descent, accelerated gradient descent, and Nesterov's smoothing method, we obtain loss guarantees for a variety of convex loss functions that significantly improve on those of the local model and sometimes match those of the central model.
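The shuffle model described above has three parties: a local randomizer run by each user, a trusted shuffler that permutes all messages, and an analyzer that post-processes the shuffled batch. The sketch below illustrates that randomize-shuffle-analyze pattern for the vector-sum primitive the paper builds on. It is a simplified illustration, not the paper's actual protocol: the names `randomize`, `shuffle_messages`, and `analyze` are hypothetical, each user sends a single Gaussian-noised message rather than the multi-message encoding the protocol uses, and no formal privacy accounting is performed.

```python
import math
import random

def randomize(x, sigma=1.0):
    """Illustrative local randomizer: clip the user's vector to unit
    l2 norm, then add Gaussian noise per coordinate. (The paper's
    protocol uses a more involved multi-message encoding.)"""
    norm = math.sqrt(sum(v * v for v in x))
    scale = min(1.0, 1.0 / norm) if norm > 0 else 1.0
    return [v * scale + random.gauss(0.0, sigma) for v in x]

def shuffle_messages(messages):
    """Trusted shuffler: uniformly permute the collected messages,
    severing the link between each message and its sender."""
    random.shuffle(messages)
    return messages

def analyze(messages, d):
    """Analyzer: coordinate-wise sum of the shuffled messages gives a
    noisy estimate of the sum of the users' (clipped) vectors."""
    total = [0.0] * d
    for m in messages:
        for i, v in enumerate(m):
            total[i] += v
    return total

# One round: users randomize per-example gradients, the shuffler
# permutes them, the analyzer's noisy sum drives a mini-batch SGD step.
users = [[0.3, 0.4], [0.6, 0.8]]
batch = shuffle_messages([randomize(u, sigma=0.1) for u in users])
noisy_sum = analyze(batch, d=2)
theta = [0.0, 0.0]
lr = 0.5
theta = [t - lr * g / len(users) for t, g in zip(theta, noisy_sum)]
```

In the paper's interactive protocols, a step like the last line is repeated over many rounds, with the noisy vector sum standing in for the exact mini-batch gradient inside mini-batch SGD or accelerated gradient descent.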

