Generalization Bounds via Information Density and Conditional Information Density

05/16/2020
by Fredrik Hellström, et al.

We present a general approach, based on an exponential inequality, to deriving bounds on the generalization error of randomized learning algorithms. Using this approach, we provide bounds on the average generalization error as well as on its tail probability, for both the PAC-Bayesian and single-draw scenarios. Specifically, for the case of subgaussian loss functions, we obtain novel bounds that depend on the information density between the training data and the output hypothesis. When suitably weakened, these bounds recover many of the information-theoretic bounds available in the literature. We also extend the proposed exponential-inequality approach to the setting recently introduced by Steinke and Zakynthinou (2020), where the learning algorithm depends on a randomly selected subset of the available training data. For this setup, we present bounds for bounded loss functions in terms of the conditional information density between the output hypothesis and the random variable determining the subset choice, given all the training data. Through our approach, we recover the average generalization bound presented by Steinke and Zakynthinou (2020) and extend it to the PAC-Bayesian and single-draw scenarios. For the single-draw scenario, we also obtain novel bounds in terms of the conditional α-mutual information and the conditional maximal leakage.
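As a reader's aid, the display below sketches the key quantities in standard notation. The two bound forms shown are the well-known average-generalization bounds of Xu and Raginsky (2017) and of Steinke and Zakynthinou (2020), which are among the bounds the abstract says can be recovered by suitable weakening; the constants and conditions here follow those references, not the paper's sharper statements, and the notation $S_U$ for the selected training set is ours. For training data $S = (Z_1, \dots, Z_n)$ and output hypothesis $W$, the information density and mutual information are

\[
\iota(W, S) = \log \frac{\mathrm{d}P_{WS}}{\mathrm{d}(P_W \otimes P_S)}(W, S),
\qquad
I(W; S) = \mathbb{E}\left[ \iota(W, S) \right].
\]

If the loss is $\sigma$-subgaussian under the data distribution, the classical average bound reads

\[
\left| \mathbb{E}\left[ \mathrm{gen}(W, S) \right] \right|
\le \sqrt{\frac{2\sigma^2}{n} \, I(W; S)}.
\]

In the Steinke and Zakynthinou (2020) setting, a supersample $\widetilde{Z} \in \mathcal{Z}^{n \times 2}$ is drawn, a uniform selector $U \in \{0,1\}^n$ picks one sample per pair to form the training set $S_U$, and for losses bounded in $[0, 1]$,

\[
\left| \mathbb{E}\left[ \mathrm{gen}(W, S_U) \right] \right|
\le \sqrt{\frac{2}{n} \, I\big(W; U \mid \widetilde{Z}\big)},
\]

where $I(W; U \mid \widetilde{Z})$, the conditional mutual information, is the average of the conditional information density appearing in the abstract.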


Related research

10/22/2020
Nonvacuous Loss Bounds with Fast Rates for Neural Networks via Conditional Information Measures
We present a framework to derive bounds on the test loss of randomized l...

04/20/2020
Generalization Error Bounds via mth Central Moments of the Information Density
We present a general approach to deriving bounds on the generalization e...

10/12/2022
A New Family of Generalization Bounds Using Samplewise Evaluated CMI
We present a new family of information-theoretic generalization bounds, ...

05/07/2014
A Mathematical Theory of Learning
In this paper, a mathematical theory of learning is proposed that has ma...

06/29/2022
Understanding Generalization via Leave-One-Out Conditional Mutual Information
We study the mutual information between (certain summaries of) the outpu...

10/21/2020
On Random Subset Generalization Error Bounds and the Stochastic Gradient Langevin Dynamics Algorithm
In this work, we unify several expected generalization error bounds base...

07/01/2022
On Leave-One-Out Conditional Mutual Information For Generalization
We derive information theoretic generalization bounds for supervised lea...
