Model-Agnostic Private Learning via Stability

03/14/2018
by Raef Bassily, et al.

We design differentially private learning algorithms that are agnostic to the learning model. Our algorithms are interactive in nature: instead of outputting a model based on the training data, they provide predictions for a set of m feature vectors that arrive online. We show that, for the feature vectors on which an ensemble of models (trained on random disjoint subsets of a dataset) makes consistent predictions, there is almost no cost of privacy in generating accurate predictions for those feature vectors. To that end, we provide a novel coupling of the distance-to-instability framework with the sparse vector technique. We provide algorithms with formal privacy and utility guarantees for both binary/multi-class classification and soft-label classification. For binary classification in the standard (agnostic) PAC model, we show how to bootstrap from our privately generated predictions to construct a computationally efficient private learner that outputs a final accurate hypothesis. To the best of our knowledge, our construction is the first computationally efficient construction of a label-private learner. We prove sample complexity upper bounds for this setting. As in non-private sample complexity bounds, the only relevant property of the given concept class is its VC dimension. For soft-label classification, our techniques are based on exploiting the stability properties of traditional learning algorithms, like stochastic gradient descent (SGD). We provide a new technique to boost the average-case stability properties of learning algorithms to strong (worst-case) stability properties, and then exploit them to obtain private classification algorithms. In the process, we also show that a large class of SGD methods satisfies average-case stability properties, in contrast to the smaller class of SGD methods that were shown to be uniformly stable in prior work.
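The core interactive mechanism described above can be sketched as subsample-and-aggregate combined with a noisy stability check: train an ensemble on disjoint data subsets, and answer a prediction query only when the ensemble's vote margin clears a noisy threshold. The sketch below is a simplified illustration under assumed parameters, not the paper's exact algorithm; the `threshold` and noise scales stand in for the full distance-to-instability and sparse-vector accounting, and `fit` is a hypothetical user-supplied training routine.

```python
import numpy as np

rng = np.random.default_rng(0)

def train_ensemble(X, y, k, fit):
    """Train k models on random disjoint subsets of the dataset
    (subsample-and-aggregate). `fit` is any non-private learner."""
    idx = rng.permutation(len(X))
    parts = np.array_split(idx, k)
    return [fit(X[p], y[p]) for p in parts]

def private_predict(models, x, eps, threshold):
    """Answer a binary-classification query only if the ensemble is
    'stable': the margin between the top vote count and the runner-up,
    perturbed by Laplace noise, must clear a noisy threshold. This is a
    simplified stand-in for the distance-to-instability / sparse-vector
    machinery; a full implementation would also track the privacy budget
    across the m online queries."""
    votes = np.bincount([m(x) for m in models], minlength=2)
    top, second = np.sort(votes)[::-1][:2]
    margin = top - second
    noisy_margin = margin + rng.laplace(scale=2.0 / eps)
    noisy_thresh = threshold + rng.laplace(scale=2.0 / eps)
    if noisy_margin >= noisy_thresh:
        # Consistent ensemble: answering reveals almost nothing about
        # any single training example, so the privacy cost is small.
        return int(np.argmax(votes))
    return None  # unstable query: decline to answer
```

For example, if all k models agree on a query, the margin is k, so for moderate eps the query is answered with high probability; disagreement near a 50/50 split yields a margin near zero and the mechanism abstains.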


