The Limits of Pan Privacy and Shuffle Privacy for Learning and Estimation

09/17/2020
by   Albert Cheu, et al.

There has been a recent wave of interest in intermediate trust models for differential privacy that eliminate the need for a fully trusted central data collector while overcoming the limitations of local differential privacy. This interest has led to the introduction of the shuffle model (Cheu et al., EUROCRYPT 2019; Erlingsson et al., SODA 2019) and to renewed study of the pan-private model (Dwork et al., ITCS 2010). The message of this line of work is that, for a variety of low-dimensional problems, such as counts, means, and histograms, these intermediate models offer nearly as much power as central differential privacy. However, there has been considerably less success using these models for high-dimensional learning and estimation problems. In this work, we show that, for a variety of high-dimensional learning and estimation problems, both the shuffle model and the pan-private model inherently incur an exponential price in sample complexity relative to the central model. For example, we show that privately agnostically learning parity functions over d bits requires Ω(2^{d/2}) samples in these models, and that privately selecting the most common attribute from a set of d choices requires Ω(√d) samples, both exponential separations from the central model, where O(d) and O(log d) samples suffice, respectively. Our work gives the first non-trivial lower bounds for these problems in both the pan-private model and the general multi-message shuffle model.
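For readers unfamiliar with the shuffle model discussed in the abstract, its three-stage structure (local randomizer, trusted shuffler, analyzer) can be sketched with a toy binary-counting protocol built on randomized response. This is an illustrative sketch only; the function names and the flip probability `p` are assumptions for the example, not a protocol from the paper:

```python
import random

def local_randomizer(bit, p=0.25, rng=random):
    # Randomized response: each user flips their bit with probability p
    # before sending it, giving a local privacy guarantee.
    return bit if rng.random() > p else 1 - bit

def shuffler(messages, rng=random):
    # The trusted shuffler uniformly permutes the reports, hiding which
    # user sent which message (this is what amplifies privacy).
    shuffled = list(messages)
    rng.shuffle(shuffled)
    return shuffled

def analyzer(messages, p=0.25):
    # Debias the noisy count: E[sum] = (1-p)*true_ones + p*(n - true_ones).
    n = len(messages)
    noisy = sum(messages)
    return (noisy - p * n) / (1 - 2 * p)

def shuffle_protocol(bits, p=0.25, seed=0):
    # End-to-end: randomize locally, shuffle, then analyze the multiset.
    rng = random.Random(seed)
    reports = [local_randomizer(b, p, rng) for b in bits]
    return analyzer(shuffler(reports, rng), p)
```

The analyzer sees only the shuffled multiset of reports, which is exactly why low-dimensional counting works well in this model, while the paper's lower bounds show that high-dimensional tasks like parity learning and selection cannot be solved nearly as efficiently.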


