Learners that Leak Little Information

10/14/2017
by Raef Bassily, et al.

We study learning algorithms that are restricted to revealing little information about their input sample. Various manifestations of this notion have recently been studied. A central theme in these works, and in ours, is that such algorithms generalize. We study a category of learning algorithms, which we term d-bit information learners: algorithms whose output conveys at most d bits of information about their input sample. We focus on the learning capacity of such algorithms and prove generalization bounds with tight dependencies on the confidence and error parameters. We observe connections with well-studied notions such as PAC-Bayes and differential privacy. For example, it is known that pure differentially private algorithms leak little information. We complement this fact with a separation between bounded information and pure differential privacy in the setting of proper learning, showing that differential privacy is strictly more restrictive. We also demonstrate limitations by exhibiting simple concept classes for which every (possibly randomized) empirical risk minimizer must leak a lot of information. On the other hand, we show that in the distribution-dependent setting every VC class has empirical risk minimizers that do not leak a lot of information.
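As a point of reference (not a statement of the paper's own theorem), a standard bound from the information-theoretic generalization literature illustrates why a small information budget forces generalization. The notation here is illustrative: S is a training sample of n i.i.d. examples from a distribution D, A(S) is the (possibly randomized) output hypothesis, err_D and err_S are the population and empirical errors under a loss bounded in [0,1], and I(·;·) is mutual information measured in nats. With this notation, the expected generalization gap satisfies

\[
\Big|\, \mathbb{E}\big[\operatorname{err}_{\mathcal{D}}(A(S)) - \operatorname{err}_{S}(A(S))\big] \,\Big|
\;\le\; \sqrt{\frac{I\big(S; A(S)\big)}{2n}}
\;\le\; \sqrt{\frac{d \ln 2}{2n}},
\]

where the last inequality converts the d-bit budget of a d-bit information learner into nats. The paper's own contribution is sharper than this expected-gap statement: it proves high-probability generalization bounds with tight dependence on the confidence and error parameters.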


Related research

07/31/2018  Subsampled Rényi Differential Privacy and Analytical Moments Accountant
We study the problem of subsampling in differential privacy (DP), a ques...

07/10/2023  A unifying framework for differentially private quantum algorithms
Differential privacy is a widely used notion of security that enables th...

07/18/2020  Tighter Generalization Bounds for Iterative Differentially Private Learning Algorithms
This paper studies the relationship between generalization and privacy p...

07/03/2019  Capacity Bounded Differential Privacy
Differential privacy, a notion of algorithmic stability, is a gold stand...

04/01/2020  Differential Privacy for Sequential Algorithms
We study the differential privacy of sequential statistical inference an...

12/26/2017  Entropy-SGD optimizes the prior of a PAC-Bayes bound: Data-dependent PAC-Bayes priors via differential privacy
We show that Entropy-SGD (Chaudhari et al., 2016), when viewed as a lear...

03/22/2023  Stability is Stable: Connections between Replicability, Privacy, and Adaptive Generalization
The notion of replicable algorithms was introduced in Impagliazzo et al....
