Tight Lower Bounds for Locally Differentially Private Selection

02/07/2018
by Jonathan Ullman, et al.

We prove a tight lower bound (up to constant factors) on the sample complexity of any non-interactive local differentially private protocol for optimizing a linear function over the simplex. This lower bound also implies a tight lower bound (again, up to constant factors) on the sample complexity of any non-interactive local differentially private protocol implementing the exponential mechanism. These results reveal that any local protocol for these problems has exponentially worse dependence on the dimension than corresponding algorithms in the central model. Previously, Kasiviswanathan et al. (FOCS 2008) proved an exponential separation between local and central model algorithms for PAC learning the class of parity functions. In contrast, our lower bounds are quantitatively tight, apply to a simple and natural class of linear optimization problems, and our techniques are arguably simpler.
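For reference, the (central-model) exponential mechanism discussed above selects one of d candidates with probability exponentially weighted by its quality score. The sketch below is illustrative only; the function name and parameters are not from the paper, and it assumes each score has sensitivity bounded by `sensitivity`:

```python
import math
import random

def exponential_mechanism(scores, epsilon, sensitivity=1.0, rng=random):
    """Sample an index i with probability proportional to
    exp(epsilon * scores[i] / (2 * sensitivity)).

    `scores` are the quality scores of the d candidates; `sensitivity`
    bounds how much one user's data can change any single score.
    """
    # Shift by the max score for numerical stability; this rescales all
    # weights by the same factor and leaves the distribution unchanged.
    m = max(scores)
    weights = [math.exp(epsilon * (s - m) / (2.0 * sensitivity)) for s in scores]
    total = sum(weights)
    # Inverse-CDF sampling over the unnormalized weights.
    r = rng.random() * total
    acc = 0.0
    for i, w in enumerate(weights):
        acc += w
        if r <= acc:
            return i
    return len(scores) - 1  # guard against floating-point rounding
```

As epsilon grows, the mechanism concentrates on the highest-scoring candidate; the paper's lower bound shows that simulating this selection non-interactively in the local model requires exponentially more samples in the dimension d.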


research
07/01/2019

Exponential Separations in Local Differential Privacy Through Communication Complexity

We prove a general connection between the communication complexity of tw...
research
07/14/2023

Smooth Lower Bounds for Differentially Private Algorithms via Padding-and-Permuting Fingerprinting Codes

Fingerprinting arguments, first introduced by Bun, Ullman, and Vadhan (S...
research
11/13/2017

Heavy Hitters and the Structure of Local Privacy

We present a new locally differentially private algorithm for the heavy ...
research
11/11/2019

Interaction is necessary for distributed learning with privacy or communication constraints

Local differential privacy (LDP) is a model where users send privatized ...
research
11/11/2022

Õptimal Differentially Private Learning of Thresholds and Quasi-Concave Optimization

The problem of learning threshold functions is a fundamental one in mach...
research
09/08/2022

Improved Robust Algorithms for Learning with Discriminative Feature Feedback

Discriminative Feature Feedback is a setting proposed by Dasgupta et al....
research
02/19/2020

Quantum statistical query learning

We propose a learning model called the quantum statistical learning QSQ ...
