Upper Bounds on the Generalization Error of Private Algorithms

05/12/2020
by Borja Rodríguez Gálvez, et al.

In this work, we study the generalization capability of algorithms from an information-theoretic perspective. It has been shown that the generalization error of an algorithm is bounded from above in terms of the mutual information between the algorithm's output hypothesis and the dataset on which it was trained. We build upon this fact and introduce a mathematical formulation for obtaining upper bounds on this mutual information. We then develop a strategy based on this formulation, using the method of types and typicality, to find explicit upper bounds on the generalization error of smooth algorithms, i.e., algorithms that produce similar output hypotheses given similar input datasets. In particular, we show the bounds obtained with this strategy for ϵ-differentially private (ϵ-DP) and μ-Gaussian differentially private (μ-GDP) algorithms.
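For context, one standard instance of the mutual-information bound the abstract refers to is that of Xu and Raginsky (2017): if the loss ℓ(w, Z) is σ-subgaussian under the data distribution for every hypothesis w, and S is a training set of n i.i.d. samples, then the expected generalization error satisfies

\[
  \bigl| \mathbb{E}\!\left[ \mathrm{gen}(W, S) \right] \bigr|
  \;\le\;
  \sqrt{\frac{2\sigma^2}{n}\, I(W; S)},
\]

where W is the algorithm's output hypothesis and I(W; S) is the mutual information between W and S. This is only a sketch of the generic form; the paper's explicit bounds for ϵ-DP and μ-GDP algorithms control I(W; S) through the privacy parameters, with assumptions and constants as given in the paper itself.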
