Learning Numeric Optimal Differentially Private Truncated Additive Mechanisms

07/27/2021
by David M. Sommer, et al.

Differentially private (DP) mechanisms face the challenge of providing accurate results while protecting their inputs: the privacy-utility trade-off. A simple but powerful technique for DP adds noise to sensitivity-bounded query outputs to blur the exact query output: additive mechanisms. While a vast body of work considers infinitely wide noise distributions, some applications (e.g., real-time operating systems) require hard bounds on the deviations from the real query, and only limited work on such mechanisms exists. An additive mechanism with truncated noise (i.e., with bounded range) can offer such hard bounds. We introduce a gradient-descent-based tool to learn truncated noise for additive mechanisms with strong utility bounds while simultaneously optimizing for differential privacy under sequential composition, i.e., scenarios where multiple noisy queries on the same data are revealed. Our method can learn discrete noise patterns, not only the hyper-parameters of a predefined probability distribution. For sensitivity-bounded mechanisms, we show that it is sufficient to consider symmetric noise and that, for noise that falls monotonically away from the mean, ensuring privacy for one pair of representative query outputs guarantees privacy for all pairs of inputs (that differ in one element). We find that the utility-privacy trade-off curves of our generated noise are remarkably close to truncated Gaussians and even replicate their shape for l_2 utility loss. For a low number of compositions, we also improve on DP-SGD (sub-sampling). Moreover, we extend the Moments Accountant to truncated distributions, allowing us to incorporate mechanism output events with varying, input-dependent zero-occurrence probability.
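To make the optimization concrete, here is a minimal sketch (illustrative, not the authors' released tool) of how such a noise distribution could be learned by gradient descent: a symmetric, discrete distribution on a truncated integer grid is parametrized by logits, the expected l_2 loss serves as the utility objective, and a penalty keeps an estimated delta(eps) for one representative pair of query outputs below a target. All names and hyper-parameters (K, EPS, DELTA_TARGET, LAMBDA) are assumptions for the sketch, not values from the paper.

```python
# Minimal sketch (assumed setup, not the paper's code): learn a symmetric,
# truncated, discrete noise distribution by gradient descent, trading
# expected l2 utility loss against an (eps, delta)-DP penalty for a
# sensitivity-1 integer query.
import math
import torch

K = 50                         # noise support: the integer grid [-K, K]
SENS = 1                       # query sensitivity (one grid step)
EPS, DELTA_TARGET = 1.0, 1e-5  # target (eps, delta) per query (assumed)
LAMBDA = 1e4                   # weight of the privacy-violation penalty

half = torch.zeros(K + 1, requires_grad=True)  # logits for offsets 0..K
opt = torch.optim.Adam([half], lr=0.05)
support = torch.arange(-K, K + 1, dtype=torch.float32)

for step in range(5000):
    opt.zero_grad()
    # Mirror the logits so the learned distribution is symmetric about 0.
    logits = torch.cat([half.flip(0)[:-1], half])
    p = torch.softmax(logits, dim=0)

    # Utility objective: expected squared magnitude of the added noise.
    utility = (p * support**2).sum()

    # delta(eps) for one representative pair of query outputs differing by
    # SENS. Outcomes near the truncation boundary are reachable under only
    # one of the two inputs (q = 0 there), so their whole mass enters delta.
    q = torch.cat([p[SENS:], torch.zeros(SENS)])
    delta_hat = torch.clamp(p - math.exp(EPS) * q, min=0).sum()

    loss = utility + LAMBDA * torch.clamp(delta_hat - DELTA_TARGET, min=0)
    loss.backward()
    opt.step()
```

Per the abstract, checking a single representative pair certifies privacy for all neighboring inputs only when the noise is symmetric and falls monotonically away from the mean; the sketch enforces symmetry but not monotonicity, so a learned distribution would still need that property verified or imposed. Sequential composition and the paper's truncated Moments Accountant are likewise omitted here.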


Related research

04/15/2020
Unifying Privacy Loss Composition for Data Analytics
Differential privacy (DP) provides rigorous privacy guarantees on indivi...

10/12/2021
Not all noise is accounted equally: How differentially private learning benefits from large sampling rates
Learning often involves sensitive data and as such, privacy preserving e...

12/17/2020
Differential privacy and noisy confidentiality concepts for European population statistics
The paper aims to give an overview of various approaches to statistical ...

04/03/2022
A Differentially Private Framework for Deep Learning with Convexified Loss Functions
Differential privacy (DP) has been applied in deep learning for preservi...

06/19/2020
Differentially Private Variational Autoencoders with Term-wise Gradient Aggregation
This paper studies how to learn variational autoencoders with a variety ...

01/31/2022
Differentially Private Top-k Selection via Canonical Lipschitz Mechanism
Selecting the top-k highest scoring items under differential privacy (DP...

11/23/2021
Optimum Noise Mechanism for Differentially Private Queries in Discrete Finite Sets
In this paper, we provide an optimal additive noise mechanism for databa...