Adaptive Statistical Learning with Bayesian Differential Privacy

11/02/2019
by Jun Zhao, et al.

In statistical learning, a dataset is often partitioned into two parts: the training set and the holdout (i.e., testing) set. The training set is used to learn a predictor, and the holdout set is then used to estimate the predictor's accuracy on the true distribution. In practice, however, the holdout dataset is often reused, and the estimates tested on it are chosen adaptively based on the results of prior estimates, so the predictor may become dependent on the holdout set. As a consequence, overfitting may occur, and the learned models may not generalize well to unseen data. Prior studies have established connections between the stability of a learning algorithm and its ability to generalize, but traditional notions of generalization are not robust to adaptive composition. Recently, Dwork et al. (NIPS, STOC, and Science 2015) showed that a holdout dataset of i.i.d. samples can be reused in adaptive statistical learning if the estimates are perturbed and coordinated using techniques developed for differential privacy, a widely used notion for quantifying privacy. However, the results of Dwork et al. apply only to i.i.d. samples, whereas real data samples are often correlated through behavioral, social, and genetic relationships between users. Our results generalize those of Dwork et al. from i.i.d. data samples to arbitrarily correlated data. Specifically, we show that a holdout dataset of correlated samples can be reused in adaptive statistical learning if the estimates are perturbed and coordinated using techniques developed for Bayesian differential privacy, a privacy notion introduced by Yang et al. (SIGMOD 2015) to broaden the application of differential privacy to correlated data records.
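For reference, the standard guarantee behind the Dwork et al. mechanism is ε-differential privacy: a randomized mechanism M is ε-differentially private if, for every pair of datasets D and D′ that differ in a single record, and every set S of possible outputs,

Pr[M(D) ∈ S] ≤ exp(ε) · Pr[M(D′) ∈ S].

Informally, Bayesian differential privacy (Yang et al., SIGMOD 2015) asks for an analogous bound against adversaries whose prior knowledge includes the correlations among records, so that the guarantee remains meaningful when other records are informative about the record being protected.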
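To make the "perturb and coordinate" idea concrete, here is a minimal Python sketch of the Thresholdout-style holdout reuse of Dwork et al. for the i.i.d. case. All class, parameter, and variable names are illustrative, and the full algorithm's periodic refresh of the noisy threshold is omitted; the paper's Bayesian-DP variant would instead calibrate the noise to the correlations among samples.

import numpy as np

rng = np.random.default_rng(0)

class Thresholdout:
    """Simplified sketch of the Thresholdout mechanism (Dwork et al., 2015).

    Answers adaptively chosen statistical queries phi: X -> [0, 1] from
    the training set whenever possible, and consults the holdout set
    (with Laplace noise) only when the two estimates disagree by more
    than a noisy threshold.
    """

    def __init__(self, train, holdout, threshold=0.04, sigma=0.01, budget=100):
        self.train = train
        self.holdout = holdout
        self.threshold = threshold  # base disagreement threshold T
        self.sigma = sigma          # Laplace noise scale
        self.budget = budget        # allowed number of holdout accesses

    def query(self, phi):
        if self.budget <= 0:
            raise RuntimeError("holdout budget exhausted")
        train_mean = np.mean([phi(x) for x in self.train])
        holdout_mean = np.mean([phi(x) for x in self.holdout])
        # Noisy comparison: consult the holdout only when the training
        # estimate visibly disagrees with it.
        if abs(holdout_mean - train_mean) > self.threshold + rng.laplace(scale=4 * self.sigma):
            self.budget -= 1
            return holdout_mean + rng.laplace(scale=self.sigma)
        return train_mean

# Illustrative usage with synthetic scalar data:
data = rng.normal(size=2000)
t = Thresholdout(train=data[:1000], holdout=data[1000:])
print(t.query(lambda x: float(x > 0)))  # roughly 0.5, possibly noised

Each adaptive query is answered from the training set unless it visibly disagrees with the holdout, in which case a Laplace-noised holdout estimate is returned and a fixed budget is decremented; this coordination is what allows the same holdout to be reused across many adaptively chosen queries.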
