Private Learning and Sanitization: Pure vs. Approximate Differential Privacy

07/10/2014
by Amos Beimel et al.

We compare the sample complexity of private learning [Kasiviswanathan et al. 2008] and sanitization [Blum et al. 2008] under pure ϵ-differential privacy [Dwork et al. TCC 2006] and approximate (ϵ,δ)-differential privacy [Dwork et al. Eurocrypt 2006]. We show that the sample complexity of these tasks under approximate differential privacy can be significantly lower than that under pure differential privacy. We define a family of optimization problems, which we call Quasi-Concave Promise Problems, that generalizes some of our considered tasks. We observe that a quasi-concave promise problem can be privately approximated using a solution to a smaller instance of a quasi-concave promise problem. This allows us to construct an efficient recursive algorithm solving such problems privately. Specifically, we construct private learners for point functions, threshold functions, and axis-aligned rectangles in high dimension. Similarly, we construct sanitizers for point functions and threshold functions. We also examine the sample complexity of label-private learners, a relaxation of private learning where the learner is required to protect only the privacy of the labels in the sample. We show that the VC dimension completely characterizes the sample complexity of such learners; that is, the sample complexity of learning with label privacy is equal (up to constants) to that of learning without privacy.
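To make the setting concrete, the following is a minimal illustrative sketch of a pure ϵ-differentially private learner for threshold functions over a finite domain, implemented with the standard exponential mechanism [McSherry and Talwar 2007]. This is not the paper's recursive quasi-concave algorithm; it is the classical baseline whose sample complexity grows with log of the domain size, which is the kind of cost the paper shows approximate differential privacy can beat. The function name and parameters are illustrative choices, not from the paper.

```python
import math
import random

def exp_mech_threshold(sample, domain_size, eps):
    """Privately pick a threshold t in {0, ..., domain_size} via the
    exponential mechanism.  The hypothesis h_t labels x as 1 iff x < t.
    The quality of t is the number of examples in `sample` it labels
    correctly; changing one example shifts every quality by at most 1
    (sensitivity 1), so sampling with weights exp(eps * q / 2) gives
    pure eps-differential privacy."""
    qualities = [
        sum(1 for x, y in sample if (1 if x < t else 0) == y)
        for t in range(domain_size + 1)
    ]
    qmax = max(qualities)  # shift by the max for numerical stability
    weights = [math.exp(eps * (q - qmax) / 2) for q in qualities]
    return random.choices(range(domain_size + 1), weights=weights)[0]
```

With high probability the mechanism returns a threshold whose empirical error is close to optimal, provided the sample is large relative to (log domain_size)/ϵ; under approximate (ϵ,δ)-differential privacy, the paper shows this dependence on the domain size can be reduced dramatically.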


