Privately Learning Thresholds: Closing the Exponential Gap

11/22/2019
by Haim Kaplan, et al.

We study the sample complexity of learning threshold functions under the constraint of differential privacy. Each labeled example in the training data is assumed to contain the information of one individual, and the goal is to output a generalizing hypothesis h while guaranteeing differential privacy for those individuals. Intuitively, this means that no single labeled example in the training data should have a significant effect on the choice of the hypothesis.

This problem has received much attention recently. Unlike the non-private case, where the sample complexity is independent of the domain size and depends only on the desired accuracy and confidence, for private learning the sample complexity must depend on the domain size |X| (even for approximate differential privacy). Alon et al. (STOC 2019) showed a lower bound of Ω(log^*|X|) on the sample complexity, and Bun et al. (FOCS 2015) presented an approximate-private learner with sample complexity Õ(2^log^*|X|).

In this work we reduce this gap significantly, almost settling the sample complexity. We first present a new upper bound (algorithm) of Õ((log^*|X|)^2) on the sample complexity, and then an improved version with sample complexity Õ((log^*|X|)^1.5). Our algorithm is constructed for the related interior point problem, where the goal is to find a point between the largest and smallest input elements. It is based on selecting an input-dependent hash function and using it to embed the database into a domain whose size is reduced logarithmically; an interior point of the resulting database can then be used, in a differentially private manner, to generate an interior point of the original database.
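To make the interior point problem concrete, here is a minimal sketch of a standard baseline solution (not the paper's algorithm): the exponential mechanism over a finite domain {0, ..., |X|-1}, which scores each candidate point by how many inputs lie on each side of it and so achieves sample complexity O(log |X|). The function name `dp_interior_point` and parameter `eps` are illustrative choices, not from the paper.

```python
import math
import random

def dp_interior_point(data, domain_size, eps):
    """Differentially private interior point via the exponential mechanism.

    Returns a point x in {0, ..., domain_size-1}; with enough data, x lies
    between min(data) and max(data) with high probability.
    """
    # Score q(x) = min(#{d <= x}, #{d >= x}): large exactly when x is an
    # interior point. Changing one input element changes q by at most 1,
    # so the score has sensitivity 1.
    scores = []
    for x in range(domain_size):
        left = sum(1 for d in data if d <= x)
        right = sum(1 for d in data if d >= x)
        scores.append(min(left, right))

    # Exponential mechanism: sample x with probability ∝ exp(eps * q(x) / 2).
    # Subtract max(scores) before exponentiating for numerical stability.
    m = max(scores)
    weights = [math.exp(eps * (s - m) / 2) for s in scores]
    r = random.random() * sum(weights)
    acc = 0.0
    for x, w in enumerate(weights):
        acc += w
        if r <= acc:
            return x
    return domain_size - 1
```

For example, with 100 inputs split between the values 10 and 20 in a domain of size 100, any point in [10, 20] maximizes the score, and for moderate eps the mechanism returns such a point with overwhelming probability.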


Related research

- 02/27/2019: Private Center Points and Learning of Halfspaces. "We present a private learner for halfspaces over an arbitrary finite dom..."
- 11/11/2022: Õptimal Differentially Private Learning of Thresholds and Quasi-Concave Optimization. "The problem of learning threshold functions is a fundamental one in mach..."
- 07/24/2021: On the Sample Complexity of Privately Learning Axis-Aligned Rectangles. "We revisit the fundamental problem of learning Axis-Aligned-Rectangles o..."
- 01/15/2019: Optimistic Optimization of a Brownian. "We address the problem of optimizing a Brownian motion. We consider a (r..."
- 07/10/2014: Private Learning and Sanitization: Pure vs. Approximate Differential Privacy. "We compare the sample complexity of private learning [Kasiviswanathan et..."
- 12/03/2020: Compressive Privatization: Sparse Distribution Estimation under Local Differential Privacy. "We consider the problem of discrete distribution estimation under locall..."
- 01/28/2022: Transfer Learning In Differential Privacy's Hybrid-Model. "The hybrid-model (Avent et al. 2017) in Differential Privacy is an augm..."
