The Price of Differential Privacy under Continual Observation

12/01/2021
by Palak Jain, et al.

We study the accuracy of differentially private mechanisms in the continual release model. A continual release mechanism receives a sensitive dataset as a stream of T inputs and produces, after receiving each input, an accurate output on the inputs received so far. In contrast, a batch algorithm receives the data as one batch and produces a single output. We provide the first strong lower bounds on the error of continual release mechanisms. In particular, for two fundamental problems that are widely studied and used in the batch model, we show that the worst-case error of every continual release algorithm is Ω̃(T^1/3) times larger than that of the best batch algorithm. Previous work showed only a polylogarithmic (in T) gap between the worst-case error achievable in these two models; further, for many problems, including the summation of binary attributes, the polylogarithmic gap is tight (Dwork et al., 2010; Chan et al., 2010). Our results show that problems closely related to summation, specifically those that require selecting the largest of a set of sums, are fundamentally harder in the continual release model than in the batch model. Our lower bounds assume only that privacy holds for streams fixed in advance (the "nonadaptive" setting). However, we provide matching upper bounds that hold in a model where privacy is required even for adaptively selected streams. This model may be of independent interest.
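To make the continual release model concrete, here is a minimal Python sketch of the standard binary-tree (dyadic) mechanism for the binary-summation problem mentioned above, the setting in which the polylogarithmic gap of Dwork et al. (2010) and Chan et al. (2010) is tight. The sketch is illustrative only: the function name continual_count and its parameters are chosen here for exposition and are not from the paper, and binary summation is not one of the selection-style problems for which the paper proves its Ω̃(T^1/3) separation.

```python
import math
import random


def continual_count(stream, epsilon):
    """Binary-tree (dyadic) mechanism for counting a bit stream under
    continual observation: after each element it releases a noisy running sum.

    Each element falls into at most `levels` dyadic blocks, and every prefix
    [1, t] decomposes into at most `levels` disjoint blocks, so Laplace noise
    of scale levels/epsilon per block gives epsilon-DP with polylog(T) error.
    """
    T = len(stream)
    levels = max(1, math.ceil(math.log2(T + 1)))
    scale = levels / epsilon  # Laplace noise scale per dyadic block

    noisy_block = {}  # (level, index) -> noisy sum of that dyadic block
    outputs = []
    for t in range(1, T + 1):
        # Close every dyadic block that ends at position t and add Laplace
        # noise, generated as the difference of two exponential variates.
        for level in range(levels):
            size = 2 ** level
            if t % size == 0:
                true_sum = sum(stream[t - size:t])
                noise = random.expovariate(1 / scale) - random.expovariate(1 / scale)
                noisy_block[(level, t // size)] = true_sum + noise
        # The prefix [1, t] is the disjoint union of the dyadic blocks given by
        # the binary representation of t; sum their noisy counts.
        estimate, remaining = 0.0, t
        while remaining > 0:
            size = remaining & -remaining       # largest block ending at `remaining`
            level = size.bit_length() - 1
            estimate += noisy_block[(level, remaining // size)]
            remaining -= size
        outputs.append(estimate)
    return outputs
```

For example, continual_count([1, 0, 1, 1, 0], epsilon=1.0) releases five noisy running counts, one after each input, as the continual release model requires; each estimate deviates from the true prefix sum by roughly polylog(T)/epsilon.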



06/12/2020
Stability of Stochastic Gradient Descent on Nonsmooth Convex Losses
Uniform stability is a notion of algorithmic stability that bounds the w...

11/08/2018
Private Continual Release of Real-Valued Data Streams
We present a differentially private mechanism to display statistics (e.g...

03/31/2021
Differentially Private Histograms under Continual Observation: Streaming Selection into the Unknown
We generalize the continuous observation privacy setting from Dwork et a...

09/07/2018
Differentially Private Continual Release of Graph Statistics
Motivated by understanding the dynamics of sensitive social networks ove...

02/23/2022
Constant matters: Fine-grained Complexity of Differentially Private Continual Observation
We study fine-grained error bounds for differentially private algorithms...

06/17/2020
Smoothed Analysis of Online and Differentially Private Learning
Practical and pervasive needs for robustness and privacy in algorithms h...

08/05/2021
Adapting to Function Difficulty and Growth Conditions in Private Optimization
We develop algorithms for private stochastic convex optimization that ad...