Private Continual Release of Real-Valued Data Streams
We present a differentially private mechanism to display statistics (e.g., the moving average) of a stream of real-valued observations where the bound on each observation is either too conservative or unknown in advance. This is particularly relevant to scenarios of real-time data monitoring and reporting, e.g., energy data through smart meters. Our focus is on real-world data streams whose distribution is light-tailed, meaning that the tail approaches zero at least as fast as that of the exponential distribution. For such data streams, individual observations are expected to be concentrated below an unknown threshold. Estimating this threshold from the data can potentially violate privacy, as it would reveal particular events tied to individuals [1]. On the other hand, an overly conservative threshold may impact accuracy by adding more noise than necessary. We construct a utility-optimizing differentially private mechanism to release this threshold based on the input stream. Our main advantage over state-of-the-art algorithms is that the noise added to each observation of the stream is scaled to the threshold instead of a possibly much larger bound, resulting in a considerable gain in utility when the difference is significant. Using two real-world datasets, we demonstrate that our mechanism, on average, improves utility by a factor of 3.5 on the first dataset and by a factor of 9 on the other. While our main focus is on the continual release of statistics, our mechanism for releasing the threshold can be used in various other applications where a (privacy-preserving) measure of the scale of the input distribution is required.
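The core idea described in the abstract, clipping each observation to a privately released threshold and scaling the noise to that threshold rather than to a loose global bound, can be sketched in a few lines of Python. This is an illustrative simplification under stated assumptions, not the paper's algorithm: the function and parameter names (clipped_noisy_moving_average, threshold, epsilon, window) are hypothetical, the private estimation of the threshold itself is not shown, and the privacy accounting for repeated releases over the stream, which the paper's mechanism handles, is omitted.

    import numpy as np

    def clipped_noisy_moving_average(stream, threshold, epsilon, window):
        # Illustrative sketch only (not the paper's mechanism): clip each
        # observation to a privately released threshold and add Laplace noise
        # whose scale is proportional to that threshold rather than to a
        # loose global bound. Composition across repeated releases is ignored.
        clipped = np.clip(np.asarray(stream, dtype=float), 0.0, threshold)
        # One observation can change a length-`window` average by at most
        # threshold / window, so that is the sensitivity of a single release.
        sensitivity = threshold / window
        releases = []
        for t in range(window, len(clipped) + 1):
            avg = clipped[t - window:t].mean()
            releases.append(avg + np.random.laplace(scale=sensitivity / epsilon))
        return releases

The utility gain reported in the abstract comes from the Laplace scale being proportional to the threshold: replacing the threshold with a conservative global bound that is, say, ten times larger would inflate the noise on every release by the same factor.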