Tighter Generalization Bounds for Iterative Differentially Private Learning Algorithms

07/18/2020
by Fengxiang He, et al.

This paper studies the relationship between generalization and privacy preservation in iterative learning algorithms in two sequential steps. We first establish an alignment between generalization and privacy preservation for any learning algorithm. We prove that (ε, δ)-differential privacy implies an on-average generalization bound for multi-database learning algorithms, which further leads to a high-probability generalization bound for any learning algorithm. This high-probability bound also implies a PAC-learnability guarantee for differentially private learning algorithms. We then investigate how the iterative nature shared by most learning algorithms influences privacy preservation and, in turn, generalization. Three composition theorems are proposed to approximate the differential privacy of any iterative algorithm from the differential privacy of each of its iterations. By integrating these two steps, we deliver generalization bounds for iterative learning algorithms, which suggest that one can simultaneously enhance privacy preservation and generalization. Our results are strictly tighter than those in existing works. In particular, our generalization bounds do not rely on the model size, which is prohibitively large in deep learning; this sheds light on the generalizability of deep learning. The results apply to a wide spectrum of learning algorithms; as examples, we apply them to stochastic gradient Langevin dynamics and agnostic federated learning.
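To make the objects in the abstract concrete, the sketch below (not taken from the paper; the hyperparameters and the grad_loss callback are illustrative placeholders) shows one epoch of stochastic gradient Langevin dynamics in Python. Each update perturbs a mini-batch gradient with Gaussian noise, so every iteration can be read as a Gaussian mechanism satisfying some per-iteration (ε_t, δ_t)-differential privacy; composition results of the kind the paper proposes then turn the per-iteration guarantees into a privacy, and hence generalization, guarantee for the whole training run.

import numpy as np

def sgld_epoch(theta, data, grad_loss, eta=1e-3, beta=1e4, batch_size=32, seed=0):
    """One illustrative epoch of SGLD: noisy gradient steps over randomly sampled mini-batches."""
    rng = np.random.default_rng(seed)
    n = data.shape[0]
    for _ in range(n // batch_size):
        idx = rng.choice(n, size=batch_size, replace=False)
        g = grad_loss(theta, data[idx])              # mini-batch gradient of the loss
        noise = rng.standard_normal(theta.shape)     # isotropic Gaussian perturbation
        theta = theta - eta * g + np.sqrt(2.0 * eta / beta) * noise
    return theta

Here beta plays the role of an inverse temperature: more injected noise (smaller beta) strengthens each iteration's privacy guarantee, and under bounds of the kind stated above a stronger privacy guarantee also yields a tighter generalization bound, which is the sense in which privacy preservation and generalization can be enhanced simultaneously.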


Related research:

10/11/2021 · Can Stochastic Gradient Langevin Dynamics Provide Differential Privacy for Deep Learning?
Bayesian learning via Stochastic Gradient Langevin Dynamics (SGLD) has b...

02/02/2022 · Tailoring Gradient Methods for Differentially-Private Distributed Optimization
Decentralized optimization is gaining increased traction due to its wide...

10/14/2017 · Learners that Leak Little Information
We study learning algorithms that are restricted to revealing little inf...

02/26/2018 · Data-dependent PAC-Bayes priors via differential privacy
The Probably Approximately Correct (PAC) Bayes framework (McAllester, 19...

02/01/2023 · Privacy Risk for anisotropic Langevin dynamics using relative entropy bounds
The privacy preserving properties of Langevin dynamics with additive iso...

04/11/2022 · Stability and Generalization of Differentially Private Minimax Problems
In the field of machine learning, many problems can be formulated as the...

05/23/2016 · DP-EM: Differentially Private Expectation Maximization
The iterative nature of the expectation maximization (EM) algorithm pres...
